Dec 04 09:34:52 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 04 09:34:52 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 04 09:34:52 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 04 09:34:52 localhost kernel: BIOS-provided physical RAM map:
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 04 09:34:52 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 04 09:34:52 localhost kernel: NX (Execute Disable) protection: active
Dec 04 09:34:52 localhost kernel: APIC: Static calls initialized
Dec 04 09:34:52 localhost kernel: SMBIOS 2.8 present.
Dec 04 09:34:52 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 04 09:34:52 localhost kernel: Hypervisor detected: KVM
Dec 04 09:34:52 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 04 09:34:52 localhost kernel: kvm-clock: using sched offset of 3311035341 cycles
Dec 04 09:34:52 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 04 09:34:52 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 04 09:34:52 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 04 09:34:52 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 04 09:34:52 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 04 09:34:52 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 04 09:34:52 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 04 09:34:52 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 04 09:34:52 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 04 09:34:52 localhost kernel: Using GB pages for direct mapping
Dec 04 09:34:52 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 04 09:34:52 localhost kernel: ACPI: Early table checksum verification disabled
Dec 04 09:34:52 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 04 09:34:52 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 04 09:34:52 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 04 09:34:52 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 04 09:34:52 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 04 09:34:52 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 04 09:34:52 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 04 09:34:52 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 04 09:34:52 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 04 09:34:52 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 04 09:34:52 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 04 09:34:52 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 04 09:34:52 localhost kernel: No NUMA configuration found
Dec 04 09:34:52 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 04 09:34:52 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 04 09:34:52 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 04 09:34:52 localhost kernel: Zone ranges:
Dec 04 09:34:52 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 04 09:34:52 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 04 09:34:52 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 04 09:34:52 localhost kernel:   Device   empty
Dec 04 09:34:52 localhost kernel: Movable zone start for each node
Dec 04 09:34:52 localhost kernel: Early memory node ranges
Dec 04 09:34:52 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 04 09:34:52 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 04 09:34:52 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 04 09:34:52 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 04 09:34:52 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 04 09:34:52 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 04 09:34:52 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 04 09:34:52 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 04 09:34:52 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 04 09:34:52 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 04 09:34:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 04 09:34:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 04 09:34:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 04 09:34:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 04 09:34:52 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 04 09:34:52 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 04 09:34:52 localhost kernel: TSC deadline timer available
Dec 04 09:34:52 localhost kernel: CPU topo: Max. logical packages:   8
Dec 04 09:34:52 localhost kernel: CPU topo: Max. logical dies:       8
Dec 04 09:34:52 localhost kernel: CPU topo: Max. dies per package:   1
Dec 04 09:34:52 localhost kernel: CPU topo: Max. threads per core:   1
Dec 04 09:34:52 localhost kernel: CPU topo: Num. cores per package:     1
Dec 04 09:34:52 localhost kernel: CPU topo: Num. threads per package:   1
Dec 04 09:34:52 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 04 09:34:52 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 04 09:34:52 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 04 09:34:52 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 04 09:34:52 localhost kernel: Booting paravirtualized kernel on KVM
Dec 04 09:34:52 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 04 09:34:52 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 04 09:34:52 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 04 09:34:52 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 04 09:34:52 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 04 09:34:52 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 04 09:34:52 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 04 09:34:52 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 04 09:34:52 localhost kernel: random: crng init done
Dec 04 09:34:52 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 04 09:34:52 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 04 09:34:52 localhost kernel: Fallback order for Node 0: 0 
Dec 04 09:34:52 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 04 09:34:52 localhost kernel: Policy zone: Normal
Dec 04 09:34:52 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 04 09:34:52 localhost kernel: software IO TLB: area num 8.
Dec 04 09:34:52 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 04 09:34:52 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 04 09:34:52 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 04 09:34:52 localhost kernel: Dynamic Preempt: voluntary
Dec 04 09:34:52 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 04 09:34:52 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 04 09:34:52 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 04 09:34:52 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 04 09:34:52 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 04 09:34:52 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 04 09:34:52 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 04 09:34:52 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 04 09:34:52 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 04 09:34:52 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 04 09:34:52 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 04 09:34:52 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 04 09:34:52 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 04 09:34:52 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 04 09:34:52 localhost kernel: Console: colour VGA+ 80x25
Dec 04 09:34:52 localhost kernel: printk: console [ttyS0] enabled
Dec 04 09:34:52 localhost kernel: ACPI: Core revision 20230331
Dec 04 09:34:52 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 04 09:34:52 localhost kernel: x2apic enabled
Dec 04 09:34:52 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 04 09:34:52 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 04 09:34:52 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 04 09:34:52 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 04 09:34:52 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 04 09:34:52 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 04 09:34:52 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 04 09:34:52 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 04 09:34:52 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 04 09:34:52 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 04 09:34:52 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 04 09:34:52 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 04 09:34:52 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 04 09:34:52 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 04 09:34:52 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 04 09:34:52 localhost kernel: x86/bugs: return thunk changed
Dec 04 09:34:52 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 04 09:34:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 04 09:34:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 04 09:34:52 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 04 09:34:52 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 04 09:34:52 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 04 09:34:52 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 04 09:34:52 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 04 09:34:52 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 04 09:34:52 localhost kernel: landlock: Up and running.
Dec 04 09:34:52 localhost kernel: Yama: becoming mindful.
Dec 04 09:34:52 localhost kernel: SELinux:  Initializing.
Dec 04 09:34:52 localhost kernel: LSM support for eBPF active
Dec 04 09:34:52 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 04 09:34:52 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 04 09:34:52 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 04 09:34:52 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 04 09:34:52 localhost kernel: ... version:                0
Dec 04 09:34:52 localhost kernel: ... bit width:              48
Dec 04 09:34:52 localhost kernel: ... generic registers:      6
Dec 04 09:34:52 localhost kernel: ... value mask:             0000ffffffffffff
Dec 04 09:34:52 localhost kernel: ... max period:             00007fffffffffff
Dec 04 09:34:52 localhost kernel: ... fixed-purpose events:   0
Dec 04 09:34:52 localhost kernel: ... event mask:             000000000000003f
Dec 04 09:34:52 localhost kernel: signal: max sigframe size: 1776
Dec 04 09:34:52 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 04 09:34:52 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 04 09:34:52 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 04 09:34:52 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 04 09:34:52 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 04 09:34:52 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 04 09:34:52 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 04 09:34:52 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 04 09:34:52 localhost kernel: Memory: 7763872K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 04 09:34:52 localhost kernel: devtmpfs: initialized
Dec 04 09:34:52 localhost kernel: x86/mm: Memory block size: 128MB
Dec 04 09:34:52 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 04 09:34:52 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 04 09:34:52 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 04 09:34:52 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 04 09:34:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 04 09:34:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 04 09:34:52 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 04 09:34:52 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 04 09:34:52 localhost kernel: audit: type=2000 audit(1764840890.273:1): state=initialized audit_enabled=0 res=1
Dec 04 09:34:52 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 04 09:34:52 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 04 09:34:52 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 04 09:34:52 localhost kernel: cpuidle: using governor menu
Dec 04 09:34:52 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 04 09:34:52 localhost kernel: PCI: Using configuration type 1 for base access
Dec 04 09:34:52 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 04 09:34:52 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 04 09:34:52 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 04 09:34:52 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 04 09:34:52 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 04 09:34:52 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 04 09:34:52 localhost kernel: Demotion targets for Node 0: null
Dec 04 09:34:52 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 04 09:34:52 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 04 09:34:52 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 04 09:34:52 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 04 09:34:52 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 04 09:34:52 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 04 09:34:52 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 04 09:34:52 localhost kernel: ACPI: Interpreter enabled
Dec 04 09:34:52 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 04 09:34:52 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 04 09:34:52 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 04 09:34:52 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 04 09:34:52 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 04 09:34:52 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 04 09:34:52 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [3] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [4] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [5] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [6] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [7] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [8] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [9] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [10] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [11] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [12] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [13] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [14] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [15] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [16] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [17] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [18] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [19] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [20] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [21] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [22] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [23] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [24] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [25] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [26] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [27] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [28] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [29] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [30] registered
Dec 04 09:34:52 localhost kernel: acpiphp: Slot [31] registered
Dec 04 09:34:52 localhost kernel: PCI host bridge to bus 0000:00
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 04 09:34:52 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 04 09:34:52 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 04 09:34:52 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 04 09:34:52 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 04 09:34:52 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 04 09:34:52 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 04 09:34:52 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 04 09:34:52 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 04 09:34:52 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 04 09:34:52 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 04 09:34:52 localhost kernel: iommu: Default domain type: Translated
Dec 04 09:34:52 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 04 09:34:52 localhost kernel: SCSI subsystem initialized
Dec 04 09:34:52 localhost kernel: ACPI: bus type USB registered
Dec 04 09:34:52 localhost kernel: usbcore: registered new interface driver usbfs
Dec 04 09:34:52 localhost kernel: usbcore: registered new interface driver hub
Dec 04 09:34:52 localhost kernel: usbcore: registered new device driver usb
Dec 04 09:34:52 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 04 09:34:52 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 04 09:34:52 localhost kernel: PTP clock support registered
Dec 04 09:34:52 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 04 09:34:52 localhost kernel: NetLabel: Initializing
Dec 04 09:34:52 localhost kernel: NetLabel:  domain hash size = 128
Dec 04 09:34:52 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 04 09:34:52 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 04 09:34:52 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 04 09:34:52 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 04 09:34:52 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 04 09:34:52 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 04 09:34:52 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 04 09:34:52 localhost kernel: vgaarb: loaded
Dec 04 09:34:52 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 04 09:34:52 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 04 09:34:52 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 04 09:34:52 localhost kernel: pnp: PnP ACPI init
Dec 04 09:34:52 localhost kernel: pnp 00:03: [dma 2]
Dec 04 09:34:52 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 04 09:34:52 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 04 09:34:52 localhost kernel: NET: Registered PF_INET protocol family
Dec 04 09:34:52 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 04 09:34:52 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 04 09:34:52 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 04 09:34:52 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 04 09:34:52 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 04 09:34:52 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 04 09:34:52 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 04 09:34:52 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 04 09:34:52 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 04 09:34:52 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 04 09:34:52 localhost kernel: NET: Registered PF_XDP protocol family
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 04 09:34:52 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 04 09:34:52 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 04 09:34:52 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 04 09:34:52 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72413 usecs
Dec 04 09:34:52 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 04 09:34:52 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 04 09:34:52 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 04 09:34:52 localhost kernel: ACPI: bus type thunderbolt registered
Dec 04 09:34:52 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 04 09:34:52 localhost kernel: Initialise system trusted keyrings
Dec 04 09:34:52 localhost kernel: Key type blacklist registered
Dec 04 09:34:52 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 04 09:34:52 localhost kernel: zbud: loaded
Dec 04 09:34:52 localhost kernel: integrity: Platform Keyring initialized
Dec 04 09:34:52 localhost kernel: integrity: Machine keyring initialized
Dec 04 09:34:52 localhost kernel: Freeing initrd memory: 87804K
Dec 04 09:34:52 localhost kernel: NET: Registered PF_ALG protocol family
Dec 04 09:34:52 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 04 09:34:52 localhost kernel: Key type asymmetric registered
Dec 04 09:34:52 localhost kernel: Asymmetric key parser 'x509' registered
Dec 04 09:34:52 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 04 09:34:52 localhost kernel: io scheduler mq-deadline registered
Dec 04 09:34:52 localhost kernel: io scheduler kyber registered
Dec 04 09:34:52 localhost kernel: io scheduler bfq registered
Dec 04 09:34:52 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 04 09:34:52 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 04 09:34:52 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 04 09:34:52 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 04 09:34:52 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 04 09:34:52 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 04 09:34:52 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 04 09:34:52 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 04 09:34:52 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 04 09:34:52 localhost kernel: Non-volatile memory driver v1.3
Dec 04 09:34:52 localhost kernel: rdac: device handler registered
Dec 04 09:34:52 localhost kernel: hp_sw: device handler registered
Dec 04 09:34:52 localhost kernel: emc: device handler registered
Dec 04 09:34:52 localhost kernel: alua: device handler registered
Dec 04 09:34:52 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 04 09:34:52 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 04 09:34:52 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 04 09:34:52 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 04 09:34:52 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 04 09:34:52 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 04 09:34:52 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 04 09:34:52 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 04 09:34:52 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 04 09:34:52 localhost kernel: hub 1-0:1.0: USB hub found
Dec 04 09:34:52 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 04 09:34:52 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 04 09:34:52 localhost kernel: usbserial: USB Serial support registered for generic
Dec 04 09:34:52 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 04 09:34:52 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 04 09:34:52 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 04 09:34:52 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 04 09:34:52 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 04 09:34:52 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 04 09:34:52 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 04 09:34:52 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-04T09:34:51 UTC (1764840891)
Dec 04 09:34:52 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 04 09:34:52 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 04 09:34:52 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 04 09:34:52 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 04 09:34:52 localhost kernel: usbcore: registered new interface driver usbhid
Dec 04 09:34:52 localhost kernel: usbhid: USB HID core driver
Dec 04 09:34:52 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 04 09:34:52 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 04 09:34:52 localhost kernel: Initializing XFRM netlink socket
Dec 04 09:34:52 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 04 09:34:52 localhost kernel: Segment Routing with IPv6
Dec 04 09:34:52 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 04 09:34:52 localhost kernel: mpls_gso: MPLS GSO support
Dec 04 09:34:52 localhost kernel: IPI shorthand broadcast: enabled
Dec 04 09:34:52 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 04 09:34:52 localhost kernel: AES CTR mode by8 optimization enabled
Dec 04 09:34:52 localhost kernel: sched_clock: Marking stable (1180001894, 151662798)->(1467886669, -136221977)
Dec 04 09:34:52 localhost kernel: registered taskstats version 1
Dec 04 09:34:52 localhost kernel: Loading compiled-in X.509 certificates
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 04 09:34:52 localhost kernel: Demotion targets for Node 0: null
Dec 04 09:34:52 localhost kernel: page_owner is disabled
Dec 04 09:34:52 localhost kernel: Key type .fscrypt registered
Dec 04 09:34:52 localhost kernel: Key type fscrypt-provisioning registered
Dec 04 09:34:52 localhost kernel: Key type big_key registered
Dec 04 09:34:52 localhost kernel: Key type encrypted registered
Dec 04 09:34:52 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 04 09:34:52 localhost kernel: Loading compiled-in module X.509 certificates
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 04 09:34:52 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 04 09:34:52 localhost kernel: ima: No architecture policies found
Dec 04 09:34:52 localhost kernel: evm: Initialising EVM extended attributes:
Dec 04 09:34:52 localhost kernel: evm: security.selinux
Dec 04 09:34:52 localhost kernel: evm: security.SMACK64 (disabled)
Dec 04 09:34:52 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 04 09:34:52 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 04 09:34:52 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 04 09:34:52 localhost kernel: evm: security.apparmor (disabled)
Dec 04 09:34:52 localhost kernel: evm: security.ima
Dec 04 09:34:52 localhost kernel: evm: security.capability
Dec 04 09:34:52 localhost kernel: evm: HMAC attrs: 0x1
Dec 04 09:34:52 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 04 09:34:52 localhost kernel: Running certificate verification RSA selftest
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 04 09:34:52 localhost kernel: Running certificate verification ECDSA selftest
Dec 04 09:34:52 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 04 09:34:52 localhost kernel: clk: Disabling unused clocks
Dec 04 09:34:52 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 04 09:34:52 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 04 09:34:52 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 04 09:34:52 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 04 09:34:52 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 04 09:34:52 localhost kernel: Run /init as init process
Dec 04 09:34:52 localhost kernel:   with arguments:
Dec 04 09:34:52 localhost kernel:     /init
Dec 04 09:34:52 localhost kernel:   with environment:
Dec 04 09:34:52 localhost kernel:     HOME=/
Dec 04 09:34:52 localhost kernel:     TERM=linux
Dec 04 09:34:52 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 04 09:34:52 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 04 09:34:52 localhost systemd[1]: Detected virtualization kvm.
Dec 04 09:34:52 localhost systemd[1]: Detected architecture x86-64.
Dec 04 09:34:52 localhost systemd[1]: Running in initrd.
Dec 04 09:34:52 localhost systemd[1]: No hostname configured, using default hostname.
Dec 04 09:34:52 localhost systemd[1]: Hostname set to <localhost>.
Dec 04 09:34:52 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 04 09:34:52 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 04 09:34:52 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 04 09:34:52 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 04 09:34:52 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 04 09:34:52 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 04 09:34:52 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 04 09:34:52 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 04 09:34:52 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 04 09:34:52 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 04 09:34:52 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 04 09:34:52 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 04 09:34:52 localhost systemd[1]: Reached target Local File Systems.
Dec 04 09:34:52 localhost systemd[1]: Reached target Path Units.
Dec 04 09:34:52 localhost systemd[1]: Reached target Slice Units.
Dec 04 09:34:52 localhost systemd[1]: Reached target Swaps.
Dec 04 09:34:52 localhost systemd[1]: Reached target Timer Units.
Dec 04 09:34:52 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 04 09:34:52 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 04 09:34:52 localhost systemd[1]: Listening on Journal Socket.
Dec 04 09:34:52 localhost systemd[1]: Listening on udev Control Socket.
Dec 04 09:34:52 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 04 09:34:52 localhost systemd[1]: Reached target Socket Units.
Dec 04 09:34:52 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 04 09:34:52 localhost systemd[1]: Starting Journal Service...
Dec 04 09:34:52 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 04 09:34:52 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 04 09:34:52 localhost systemd[1]: Starting Create System Users...
Dec 04 09:34:52 localhost systemd[1]: Starting Setup Virtual Console...
Dec 04 09:34:52 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 04 09:34:52 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 04 09:34:52 localhost systemd-journald[310]: Journal started
Dec 04 09:34:52 localhost systemd-journald[310]: Runtime Journal (/run/log/journal/1f0bfa2dc9224848973a776654e5dc59) is 8.0M, max 153.6M, 145.6M free.
Dec 04 09:34:52 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Dec 04 09:34:52 localhost systemd[1]: Started Journal Service.
Dec 04 09:34:52 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Dec 04 09:34:52 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 04 09:34:52 localhost systemd[1]: Finished Create System Users.
Dec 04 09:34:52 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 04 09:34:52 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 04 09:34:52 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 04 09:34:52 localhost systemd[1]: Finished Setup Virtual Console.
Dec 04 09:34:52 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 04 09:34:52 localhost systemd[1]: Starting dracut cmdline hook...
Dec 04 09:34:52 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 04 09:34:52 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Dec 04 09:34:52 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 04 09:34:52 localhost systemd[1]: Finished dracut cmdline hook.
Dec 04 09:34:52 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 04 09:34:52 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 04 09:34:52 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 04 09:34:52 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 04 09:34:52 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 04 09:34:52 localhost kernel: RPC: Registered udp transport module.
Dec 04 09:34:52 localhost kernel: RPC: Registered tcp transport module.
Dec 04 09:34:52 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 04 09:34:52 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 04 09:34:52 localhost rpc.statd[446]: Version 2.5.4 starting
Dec 04 09:34:52 localhost rpc.statd[446]: Initializing NSM state
Dec 04 09:34:52 localhost rpc.idmapd[451]: Setting log level to 0
Dec 04 09:34:52 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 04 09:34:53 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 04 09:34:53 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Dec 04 09:34:53 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 04 09:34:53 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 04 09:34:53 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 04 09:34:53 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 04 09:34:53 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 04 09:34:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 04 09:34:53 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 04 09:34:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 04 09:34:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 04 09:34:53 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 04 09:34:53 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 04 09:34:53 localhost systemd[1]: Reached target Network.
Dec 04 09:34:53 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 04 09:34:53 localhost systemd[1]: Starting dracut initqueue hook...
Dec 04 09:34:53 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 04 09:34:53 localhost systemd[1]: Reached target System Initialization.
Dec 04 09:34:53 localhost systemd[1]: Reached target Basic System.
Dec 04 09:34:53 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 04 09:34:53 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 04 09:34:53 localhost systemd-udevd[502]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 09:34:53 localhost kernel:  vda: vda1
Dec 04 09:34:53 localhost kernel: libata version 3.00 loaded.
Dec 04 09:34:53 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 04 09:34:53 localhost kernel: scsi host0: ata_piix
Dec 04 09:34:53 localhost kernel: scsi host1: ata_piix
Dec 04 09:34:53 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 04 09:34:53 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 04 09:34:53 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 04 09:34:53 localhost systemd[1]: Reached target Initrd Root Device.
Dec 04 09:34:53 localhost kernel: ata1: found unknown device (class 0)
Dec 04 09:34:53 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 04 09:34:53 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 04 09:34:53 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 04 09:34:53 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 04 09:34:53 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 04 09:34:53 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 04 09:34:53 localhost systemd[1]: Finished dracut initqueue hook.
Dec 04 09:34:53 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 04 09:34:53 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 04 09:34:53 localhost systemd[1]: Reached target Remote File Systems.
Dec 04 09:34:53 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 04 09:34:53 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 04 09:34:53 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 04 09:34:53 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec 04 09:34:53 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 04 09:34:53 localhost systemd[1]: Mounting /sysroot...
Dec 04 09:34:54 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 04 09:34:54 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 04 09:34:54 localhost kernel: XFS (vda1): Ending clean mount
Dec 04 09:34:54 localhost systemd[1]: Mounted /sysroot.
Dec 04 09:34:54 localhost systemd[1]: Reached target Initrd Root File System.
Dec 04 09:34:54 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 04 09:34:54 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 04 09:34:54 localhost systemd[1]: Reached target Initrd File Systems.
Dec 04 09:34:54 localhost systemd[1]: Reached target Initrd Default Target.
Dec 04 09:34:54 localhost systemd[1]: Starting dracut mount hook...
Dec 04 09:34:54 localhost systemd[1]: Finished dracut mount hook.
Dec 04 09:34:54 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 04 09:34:54 localhost rpc.idmapd[451]: exiting on signal 15
Dec 04 09:34:54 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 04 09:34:54 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 04 09:34:54 localhost systemd[1]: Stopped target Network.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Timer Units.
Dec 04 09:34:54 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 04 09:34:54 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Basic System.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Path Units.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Remote File Systems.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Slice Units.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Socket Units.
Dec 04 09:34:54 localhost systemd[1]: Stopped target System Initialization.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Local File Systems.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Swaps.
Dec 04 09:34:54 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut mount hook.
Dec 04 09:34:54 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 04 09:34:54 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 04 09:34:54 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 04 09:34:54 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 04 09:34:54 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 04 09:34:54 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 04 09:34:54 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 04 09:34:54 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 04 09:34:54 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 04 09:34:54 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 04 09:34:54 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 04 09:34:54 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 04 09:34:54 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Closed udev Control Socket.
Dec 04 09:34:54 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Closed udev Kernel Socket.
Dec 04 09:34:54 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 04 09:34:54 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 04 09:34:54 localhost systemd[1]: Starting Cleanup udev Database...
Dec 04 09:34:54 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 04 09:34:54 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 04 09:34:54 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Stopped Create System Users.
Dec 04 09:34:54 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 04 09:34:54 localhost systemd[1]: Finished Cleanup udev Database.
Dec 04 09:34:54 localhost systemd[1]: Reached target Switch Root.
Dec 04 09:34:54 localhost systemd[1]: Starting Switch Root...
Dec 04 09:34:54 localhost systemd[1]: Switching root.
Dec 04 09:34:54 localhost systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Dec 04 09:34:54 localhost systemd-journald[310]: Journal stopped
Dec 04 09:34:55 localhost kernel: audit: type=1404 audit(1764840894.708:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability open_perms=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 09:34:55 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 09:34:55 localhost kernel: audit: type=1403 audit(1764840894.838:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 04 09:34:55 localhost systemd[1]: Successfully loaded SELinux policy in 132.259ms.
Dec 04 09:34:55 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.146ms.
Dec 04 09:34:55 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 04 09:34:55 localhost systemd[1]: Detected virtualization kvm.
Dec 04 09:34:55 localhost systemd[1]: Detected architecture x86-64.
Dec 04 09:34:55 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 09:34:55 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Stopped Switch Root.
Dec 04 09:34:55 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 04 09:34:55 localhost systemd[1]: Created slice Slice /system/getty.
Dec 04 09:34:55 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 04 09:34:55 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 04 09:34:55 localhost systemd[1]: Created slice User and Session Slice.
Dec 04 09:34:55 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 04 09:34:55 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 04 09:34:55 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 04 09:34:55 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 04 09:34:55 localhost systemd[1]: Stopped target Switch Root.
Dec 04 09:34:55 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 04 09:34:55 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 04 09:34:55 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 04 09:34:55 localhost systemd[1]: Reached target Path Units.
Dec 04 09:34:55 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 04 09:34:55 localhost systemd[1]: Reached target Slice Units.
Dec 04 09:34:55 localhost systemd[1]: Reached target Swaps.
Dec 04 09:34:55 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 04 09:34:55 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 04 09:34:55 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 04 09:34:55 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 04 09:34:55 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 04 09:34:55 localhost systemd[1]: Listening on udev Control Socket.
Dec 04 09:34:55 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 04 09:34:55 localhost systemd[1]: Mounting Huge Pages File System...
Dec 04 09:34:55 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 04 09:34:55 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 04 09:34:55 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 04 09:34:55 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 04 09:34:55 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 04 09:34:55 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 04 09:34:55 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 04 09:34:55 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 04 09:34:55 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 04 09:34:55 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 04 09:34:55 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 04 09:34:55 localhost systemd[1]: Stopped Journal Service.
Dec 04 09:34:55 localhost systemd[1]: Starting Journal Service...
Dec 04 09:34:55 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 04 09:34:55 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 04 09:34:55 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 04 09:34:55 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 04 09:34:55 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 04 09:34:55 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 04 09:34:55 localhost kernel: fuse: init (API version 7.37)
Dec 04 09:34:55 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 04 09:34:55 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 04 09:34:55 localhost systemd[1]: Mounted Huge Pages File System.
Dec 04 09:34:55 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 04 09:34:55 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 04 09:34:55 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 04 09:34:55 localhost systemd-journald[680]: Journal started
Dec 04 09:34:55 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 04 09:34:55 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 04 09:34:55 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Started Journal Service.
Dec 04 09:34:55 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 04 09:34:55 localhost kernel: ACPI: bus type drm_connector registered
Dec 04 09:34:55 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 04 09:34:55 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 04 09:34:55 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 04 09:34:55 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 04 09:34:55 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 04 09:34:55 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 04 09:34:55 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 04 09:34:55 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 04 09:34:55 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 04 09:34:55 localhost systemd[1]: Mounting FUSE Control File System...
Dec 04 09:34:55 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 04 09:34:55 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 04 09:34:55 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 04 09:34:55 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 04 09:34:55 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 04 09:34:55 localhost systemd[1]: Starting Create System Users...
Dec 04 09:34:55 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 04 09:34:55 localhost systemd[1]: Mounted FUSE Control File System.
Dec 04 09:34:55 localhost systemd-journald[680]: Received client request to flush runtime journal.
Dec 04 09:34:55 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 04 09:34:55 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 04 09:34:55 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 04 09:34:55 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 04 09:34:55 localhost systemd[1]: Finished Create System Users.
Dec 04 09:34:55 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 04 09:34:55 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 04 09:34:55 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 04 09:34:55 localhost systemd[1]: Reached target Local File Systems.
Dec 04 09:34:55 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 04 09:34:55 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 04 09:34:55 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 04 09:34:55 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 04 09:34:55 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 04 09:34:55 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 04 09:34:55 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 04 09:34:55 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Dec 04 09:34:55 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 04 09:34:55 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 04 09:34:55 localhost systemd[1]: Starting Security Auditing Service...
Dec 04 09:34:55 localhost systemd[1]: Starting RPC Bind...
Dec 04 09:34:55 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 04 09:34:55 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 04 09:34:55 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 04 09:34:55 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 04 09:34:55 localhost systemd[1]: Started RPC Bind.
Dec 04 09:34:55 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 04 09:34:55 localhost augenrules[710]: /sbin/augenrules: No change
Dec 04 09:34:55 localhost augenrules[725]: No rules
Dec 04 09:34:55 localhost augenrules[725]: enabled 1
Dec 04 09:34:55 localhost augenrules[725]: failure 1
Dec 04 09:34:55 localhost augenrules[725]: pid 705
Dec 04 09:34:55 localhost augenrules[725]: rate_limit 0
Dec 04 09:34:55 localhost augenrules[725]: backlog_limit 8192
Dec 04 09:34:55 localhost augenrules[725]: lost 0
Dec 04 09:34:55 localhost augenrules[725]: backlog 0
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time 60000
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 04 09:34:55 localhost augenrules[725]: enabled 1
Dec 04 09:34:55 localhost augenrules[725]: failure 1
Dec 04 09:34:55 localhost augenrules[725]: pid 705
Dec 04 09:34:55 localhost augenrules[725]: rate_limit 0
Dec 04 09:34:55 localhost augenrules[725]: backlog_limit 8192
Dec 04 09:34:55 localhost augenrules[725]: lost 0
Dec 04 09:34:55 localhost augenrules[725]: backlog 2
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time 60000
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 04 09:34:55 localhost augenrules[725]: enabled 1
Dec 04 09:34:55 localhost augenrules[725]: failure 1
Dec 04 09:34:55 localhost augenrules[725]: pid 705
Dec 04 09:34:55 localhost augenrules[725]: rate_limit 0
Dec 04 09:34:55 localhost augenrules[725]: backlog_limit 8192
Dec 04 09:34:55 localhost augenrules[725]: lost 0
Dec 04 09:34:55 localhost augenrules[725]: backlog 2
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time 60000
Dec 04 09:34:55 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 04 09:34:55 localhost systemd[1]: Started Security Auditing Service.
Dec 04 09:34:55 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 04 09:34:55 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 04 09:34:56 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 04 09:34:56 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 04 09:34:56 localhost systemd[1]: Starting Update is Completed...
Dec 04 09:34:56 localhost systemd[1]: Finished Update is Completed.
Dec 04 09:34:56 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Dec 04 09:34:56 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 04 09:34:56 localhost systemd[1]: Reached target System Initialization.
Dec 04 09:34:56 localhost systemd[1]: Started dnf makecache --timer.
Dec 04 09:34:56 localhost systemd[1]: Started Daily rotation of log files.
Dec 04 09:34:56 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 04 09:34:56 localhost systemd[1]: Reached target Timer Units.
Dec 04 09:34:56 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 04 09:34:56 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 04 09:34:56 localhost systemd[1]: Reached target Socket Units.
Dec 04 09:34:56 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 04 09:34:56 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 04 09:34:56 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 04 09:34:56 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 04 09:34:56 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 04 09:34:56 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 04 09:34:56 localhost systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 09:34:56 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 04 09:34:56 localhost systemd[1]: Reached target Basic System.
Dec 04 09:34:56 localhost dbus-broker-lau[758]: Ready
Dec 04 09:34:56 localhost systemd[1]: Starting NTP client/server...
Dec 04 09:34:56 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 04 09:34:56 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 04 09:34:56 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 04 09:34:56 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 04 09:34:56 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 04 09:34:56 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 04 09:34:56 localhost chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 04 09:34:56 localhost chronyd[791]: Loaded 0 symmetric keys
Dec 04 09:34:56 localhost chronyd[791]: Using right/UTC timezone to obtain leap second data
Dec 04 09:34:56 localhost chronyd[791]: Loaded seccomp filter (level 2)
Dec 04 09:34:56 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 04 09:34:56 localhost systemd[1]: Started irqbalance daemon.
Dec 04 09:34:56 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 04 09:34:56 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 09:34:56 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 09:34:56 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 09:34:56 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 04 09:34:56 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 04 09:34:56 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 04 09:34:56 localhost systemd[1]: Starting User Login Management...
Dec 04 09:34:56 localhost systemd[1]: Started NTP client/server.
Dec 04 09:34:56 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 04 09:34:56 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 04 09:34:56 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 04 09:34:56 localhost kernel: Console: switching to colour dummy device 80x25
Dec 04 09:34:56 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 04 09:34:56 localhost kernel: [drm] features: -context_init
Dec 04 09:34:56 localhost kernel: [drm] number of scanouts: 1
Dec 04 09:34:56 localhost kernel: [drm] number of cap sets: 0
Dec 04 09:34:56 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 04 09:34:56 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 04 09:34:56 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 04 09:34:56 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 04 09:34:56 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 04 09:34:56 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 04 09:34:56 localhost kernel: kvm_amd: TSC scaling supported
Dec 04 09:34:56 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 04 09:34:56 localhost kernel: kvm_amd: Nested Paging enabled
Dec 04 09:34:56 localhost kernel: kvm_amd: LBR virtualization supported
Dec 04 09:34:56 localhost systemd-logind[798]: New seat seat0.
Dec 04 09:34:56 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 04 09:34:56 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 04 09:34:56 localhost systemd[1]: Started User Login Management.
Dec 04 09:34:56 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec 04 09:34:56 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 04 09:34:56 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 04 Dec 2025 09:34:56 +0000. Up 6.32 seconds.
Dec 04 09:34:56 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 04 09:34:56 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 04 09:34:56 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpfjn80l8m.mount: Deactivated successfully.
Dec 04 09:34:56 localhost systemd[1]: Starting Hostname Service...
Dec 04 09:34:57 localhost systemd[1]: Started Hostname Service.
Dec 04 09:34:57 np0005545273.novalocal systemd-hostnamed[856]: Hostname set to <np0005545273.novalocal> (static)
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Reached target Preparation for Network.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Starting Network Manager...
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2110] NetworkManager (version 1.54.1-1.el9) is starting... (boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2115] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2197] manager[0x55a174985080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2235] hostname: hostname: using hostnamed
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2235] hostname: static hostname changed from (none) to "np0005545273.novalocal"
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2239] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2370] manager[0x55a174985080]: rfkill: Wi-Fi hardware radio set enabled
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2371] manager[0x55a174985080]: rfkill: WWAN hardware radio set enabled
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2423] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2424] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2425] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2426] manager: Networking is enabled by state file
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2428] settings: Loaded settings plugin: keyfile (internal)
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2439] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2467] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2483] dhcp: init: Using DHCP client 'internal'
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2486] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2507] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2517] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2527] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2538] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2542] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2577] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2583] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2585] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2587] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2590] device (eth0): carrier: link connected
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2594] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2600] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2605] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2609] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2611] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2613] manager: NetworkManager state is now CONNECTING
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2614] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2621] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2624] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2677] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2684] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2702] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Started Network Manager.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Reached target Network.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2973] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2981] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.2983] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3001] device (lo): Activation: successful, device activated.
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3017] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3023] manager: NetworkManager state is now CONNECTED_SITE
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3030] device (eth0): Activation: successful, device activated.
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3037] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 04 09:34:57 np0005545273.novalocal NetworkManager[860]: <info>  [1764840897.3040] manager: startup complete
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Reached target NFS client services.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Reached target Remote File Systems.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 04 09:34:57 np0005545273.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 04 Dec 2025 09:34:57 +0000. Up 7.24 seconds.
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.169         | 255.255.255.0 | global | fa:16:3e:e2:26:53 |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fee2:2653/64 |       .       |  link  | fa:16:3e:e2:26:53 |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 04 09:34:57 np0005545273.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Dec 04 09:35:00 np0005545273.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Generating public/private rsa key pair.
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key fingerprint is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: SHA256:SOKl5YOFJg3y4xP1gD+6MIF1nzlXlosXII9rK9Wj6X4 root@np0005545273.novalocal
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key's randomart image is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +---[RSA 3072]----+
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |. ..o . .. .     |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | oo+.+ +  =      |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |..=o=.Bo.+ o     |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |o. *oX=+o o      |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | .o.+.BoS.       |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |o .. o = .       |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | o .. +          |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |  .  o  E        |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |     .o.         |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +----[SHA256]-----+
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key fingerprint is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: SHA256:8PK4bSMwPSMuUx/IPwPb3Q2p/4RD04yiV98AOF5c+48 root@np0005545273.novalocal
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key's randomart image is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +---[ECDSA 256]---+
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |            .    |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |         o . .   |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |      . o + .    |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |       + o = .   |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |   . o. S =.+ .  |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |    O == +o+ o o |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |   o Xo=ooooo E .|
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |  o o B+= .o.    |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |   o  .=.o...    |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +----[SHA256]-----+
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key fingerprint is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: SHA256:Kw1KxzD3CXgyiFiocLbwAs57iba4815XYGhFDz1z1k8 root@np0005545273.novalocal
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: The key's randomart image is:
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +--[ED25519 256]--+
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | ..  .+.   .     |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |*oo. + o+ o . E  |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |O=..O * .=   o   |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |ooo. X + .    .  |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | .o o + S        |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: | + + o + .       |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |o o o o o        |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |o. . . .         |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: |.=o              |
Dec 04 09:35:00 np0005545273.novalocal cloud-init[924]: +----[SHA256]-----+
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Reached target Network is Online.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting System Logging Service...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 04 09:35:00 np0005545273.novalocal sm-notify[1006]: Version 2.5.4 starting
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting Permit User Sessions...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Finished Permit User Sessions.
Dec 04 09:35:00 np0005545273.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 04 09:35:00 np0005545273.novalocal sshd[1008]: Server listening on :: port 22.
Dec 04 09:35:00 np0005545273.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Dec 04 09:35:00 np0005545273.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started Command Scheduler.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started Getty on tty1.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 04 09:35:00 np0005545273.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Dec 04 09:35:00 np0005545273.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Reached target Login Prompts.
Dec 04 09:35:00 np0005545273.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 96% if used.)
Dec 04 09:35:00 np0005545273.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Started System Logging Service.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Reached target Multi-User System.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 04 09:35:00 np0005545273.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 04 09:35:00 np0005545273.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 09:35:00 np0005545273.novalocal kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Dec 04 09:35:00 np0005545273.novalocal kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1146]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 04 Dec 2025 09:35:00 +0000. Up 10.63 seconds.
Dec 04 09:35:01 np0005545273.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 04 09:35:01 np0005545273.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 04 09:35:01 np0005545273.novalocal dracut[1267]: dracut-057-102.git20250818.el9
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1285]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 04 Dec 2025 09:35:01 +0000. Up 11.07 seconds.
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1288]: Connection reset by 38.102.83.114 port 58246 [preauth]
Dec 04 09:35:01 np0005545273.novalocal dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1297]: Unable to negotiate with 38.102.83.114 port 58260: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1308]: Connection reset by 38.102.83.114 port 58266 [preauth]
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1315]: #############################################################
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1318]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1316]: Unable to negotiate with 38.102.83.114 port 58282: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1328]: 256 SHA256:8PK4bSMwPSMuUx/IPwPb3Q2p/4RD04yiV98AOF5c+48 root@np0005545273.novalocal (ECDSA)
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1339]: 256 SHA256:Kw1KxzD3CXgyiFiocLbwAs57iba4815XYGhFDz1z1k8 root@np0005545273.novalocal (ED25519)
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1349]: 3072 SHA256:SOKl5YOFJg3y4xP1gD+6MIF1nzlXlosXII9rK9Wj6X4 root@np0005545273.novalocal (RSA)
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1337]: Unable to negotiate with 38.102.83.114 port 58298: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1351]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1356]: #############################################################
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1354]: Connection reset by 38.102.83.114 port 58300 [preauth]
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1364]: Connection reset by 38.102.83.114 port 58306 [preauth]
Dec 04 09:35:01 np0005545273.novalocal cloud-init[1285]: Cloud-init v. 24.4-7.el9 finished at Thu, 04 Dec 2025 09:35:01 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.28 seconds
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1366]: Unable to negotiate with 38.102.83.114 port 58308: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 04 09:35:01 np0005545273.novalocal sshd-session[1371]: Unable to negotiate with 38.102.83.114 port 58324: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 04 09:35:01 np0005545273.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 04 09:35:01 np0005545273.novalocal systemd[1]: Reached target Cloud-init target.
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal chronyd[791]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Dec 04 09:35:02 np0005545273.novalocal chronyd[791]: System clock TAI offset set to 37 seconds
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: memstrack is not available
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: memstrack is not available
Dec 04 09:35:02 np0005545273.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 04 09:35:03 np0005545273.novalocal dracut[1269]: *** Including module: systemd ***
Dec 04 09:35:03 np0005545273.novalocal dracut[1269]: *** Including module: fips ***
Dec 04 09:35:03 np0005545273.novalocal dracut[1269]: *** Including module: systemd-initrd ***
Dec 04 09:35:03 np0005545273.novalocal dracut[1269]: *** Including module: i18n ***
Dec 04 09:35:03 np0005545273.novalocal dracut[1269]: *** Including module: drm ***
Dec 04 09:35:04 np0005545273.novalocal dracut[1269]: *** Including module: prefixdevname ***
Dec 04 09:35:04 np0005545273.novalocal dracut[1269]: *** Including module: kernel-modules ***
Dec 04 09:35:04 np0005545273.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: kernel-modules-extra ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: qemu ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: fstab-sys ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: rootfs-block ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: terminfo ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: udev-rules ***
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: Skipping udev rule: 91-permissions.rules
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 04 09:35:05 np0005545273.novalocal dracut[1269]: *** Including module: virtiofs ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: dracut-systemd ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: usrmount ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: base ***
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 04 09:35:06 np0005545273.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: fs-lib ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: kdumpbase ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]:   microcode_ctl module: mangling fw_dir
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 04 09:35:06 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 04 09:35:07 np0005545273.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]: *** Including module: openssl ***
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]: *** Including module: shutdown ***
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]: *** Including module: squash ***
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]: *** Including modules done ***
Dec 04 09:35:07 np0005545273.novalocal dracut[1269]: *** Installing kernel module dependencies ***
Dec 04 09:35:08 np0005545273.novalocal dracut[1269]: *** Installing kernel module dependencies done ***
Dec 04 09:35:08 np0005545273.novalocal dracut[1269]: *** Resolving executable dependencies ***
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: *** Resolving executable dependencies done ***
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: *** Generating early-microcode cpio image ***
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: *** Store current command line parameters ***
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: Stored kernel commandline:
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Dec 04 09:35:10 np0005545273.novalocal dracut[1269]: *** Install squash loader ***
Dec 04 09:35:11 np0005545273.novalocal dracut[1269]: *** Squashing the files inside the initramfs ***
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: *** Squashing the files inside the initramfs done ***
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: *** Hardlinking files ***
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Mode:           real
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Files:          50
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Linked:         0 files
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Compared:       0 xattrs
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Compared:       0 files
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Saved:          0 B
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: Duration:       0.001115 seconds
Dec 04 09:35:12 np0005545273.novalocal dracut[1269]: *** Hardlinking files done ***
Dec 04 09:35:13 np0005545273.novalocal dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 04 09:35:13 np0005545273.novalocal kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Dec 04 09:35:13 np0005545273.novalocal kdumpctl[1014]: kdump: Starting kdump: [OK]
Dec 04 09:35:13 np0005545273.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 04 09:35:13 np0005545273.novalocal systemd[1]: Startup finished in 1.545s (kernel) + 2.804s (initrd) + 19.085s (userspace) = 23.435s.
Dec 04 09:35:26 np0005545273.novalocal sshd-session[4296]: Accepted publickey for zuul from 38.102.83.114 port 40382 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 04 09:35:26 np0005545273.novalocal systemd-logind[798]: New session 1 of user zuul.
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Queued start job for default target Main User Target.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Created slice User Application Slice.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Reached target Paths.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Reached target Timers.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Starting D-Bus User Message Bus Socket...
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Starting Create User's Volatile Files and Directories...
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Finished Create User's Volatile Files and Directories.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Listening on D-Bus User Message Bus Socket.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Reached target Sockets.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Reached target Basic System.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Reached target Main User Target.
Dec 04 09:35:26 np0005545273.novalocal systemd[4300]: Startup finished in 144ms.
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 04 09:35:26 np0005545273.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 04 09:35:26 np0005545273.novalocal sshd-session[4296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:35:27 np0005545273.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 04 09:35:27 np0005545273.novalocal python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 09:35:29 np0005545273.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 09:35:35 np0005545273.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 09:35:36 np0005545273.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 04 09:35:38 np0005545273.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUqQ+zl6uP5KOngryJfCkwhsXDB3oKN/oaspiL29U/2htEnlgClVIUqWUFROF9cojHZrJS7yBFbep+K7ia1Dx6zoAwADAOWyndh0dCkGDk9PTh2TgGHSQ+BDm3L+v+bpMHl7fZDiUdLCZLuouKBKSqV1nOImjFhsiHQaiUcQYKlxCVEaG5PbbYj0kOFUYLN6FjLRLs/8sCfmdl0sBkaM1E+Dj41CnuhXDYr6n/CzIdZAArx0j5DLsaOpDRSZdS6Y04CWdMye4E3mL4kCMwB1WxEL4vtopwfrXpAVDbn4E1Nh9WO27G6m3IWcnjGdzl0T4Pxvp1nE4ocR3R9/TnobaQoLbqzDn1HHMMpWfg5WePf/GrAWUir8gFZpHb6Fuw4nTgL+wZs2wViNFZ+4aEEwsXrhmRVHmFsr4XGALR+VaJjLh30YeRgdX1iy+3t2vEwnUef2eo+0KrVrAYMEJGiQecTsjVe7nW7c6JwoRy+eTI0qY6LVA7Dbgmwj7EhlUaPoE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:38 np0005545273.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:39 np0005545273.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:35:39 np0005545273.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764840938.6559348-207-228925058330639/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=97d96a2127d94d00a8de10b9a25007d0_id_rsa follow=False checksum=4a2583d826b5c5c32fdb603a217b55fd5664c5ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:40 np0005545273.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:35:40 np0005545273.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764840939.6373396-240-22004543482279/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=97d96a2127d94d00a8de10b9a25007d0_id_rsa.pub follow=False checksum=0ac96abdd642eb78b0b0bdefaa890f144fcc6145 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:41 np0005545273.novalocal python3[4973]: ansible-ping Invoked with data=pong
Dec 04 09:35:42 np0005545273.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 09:35:44 np0005545273.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 04 09:35:45 np0005545273.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:45 np0005545273.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:45 np0005545273.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:46 np0005545273.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:46 np0005545273.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:46 np0005545273.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:47 np0005545273.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxmiqpxvpecyzdnbuqymruzgpehmzdwb ; /usr/bin/python3'
Dec 04 09:35:47 np0005545273.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:35:48 np0005545273.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:48 np0005545273.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Dec 04 09:35:48 np0005545273.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgmmorloadksvsbcyklskzmhgowkbfg ; /usr/bin/python3'
Dec 04 09:35:48 np0005545273.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:35:48 np0005545273.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:35:48 np0005545273.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Dec 04 09:35:49 np0005545273.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilindoikywlgpgpqdixcvevypuuxemal ; /usr/bin/python3'
Dec 04 09:35:49 np0005545273.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:35:49 np0005545273.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764840948.3267643-21-244587707564640/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:35:49 np0005545273.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Dec 04 09:35:49 np0005545273.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:50 np0005545273.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:50 np0005545273.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:50 np0005545273.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:50 np0005545273.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:51 np0005545273.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:51 np0005545273.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:51 np0005545273.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:52 np0005545273.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:52 np0005545273.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:52 np0005545273.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:52 np0005545273.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:53 np0005545273.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:53 np0005545273.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:53 np0005545273.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:54 np0005545273.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:54 np0005545273.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:54 np0005545273.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:55 np0005545273.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:55 np0005545273.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:55 np0005545273.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:55 np0005545273.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:56 np0005545273.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:56 np0005545273.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:56 np0005545273.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:57 np0005545273.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:35:59 np0005545273.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatzdwpwgivbfpphlklcvapumrxofasz ; /usr/bin/python3'
Dec 04 09:35:59 np0005545273.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:35:59 np0005545273.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 04 09:35:59 np0005545273.novalocal systemd[1]: Starting Time & Date Service...
Dec 04 09:35:59 np0005545273.novalocal systemd[1]: Started Time & Date Service.
Dec 04 09:35:59 np0005545273.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec 04 09:35:59 np0005545273.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:00 np0005545273.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbioumtkwbkzvusiifnhyawutswojzeb ; /usr/bin/python3'
Dec 04 09:36:00 np0005545273.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:00 np0005545273.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:00 np0005545273.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:00 np0005545273.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:36:01 np0005545273.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764840960.5124042-153-273856972897201/source _original_basename=tmpn3uswt3_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:01 np0005545273.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:36:02 np0005545273.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764840961.385366-183-257220758849323/source _original_basename=tmpvqw8h3x7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:02 np0005545273.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzsyzzuhncjglpnlqyatrajsmtysrrxl ; /usr/bin/python3'
Dec 04 09:36:02 np0005545273.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:02 np0005545273.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:36:02 np0005545273.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:03 np0005545273.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foixzzxcoywerlbvxbgetqrbmvqemsrl ; /usr/bin/python3'
Dec 04 09:36:03 np0005545273.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:03 np0005545273.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764840962.5349348-231-272022659480525/source _original_basename=tmpcyv732f2 follow=False checksum=7a82bff5b5e9039ad1ac15f6a7286925b777bf85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:03 np0005545273.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:03 np0005545273.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:36:04 np0005545273.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:36:04 np0005545273.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmtqomxhhkkwnquqgddbkctswkjdkgsk ; /usr/bin/python3'
Dec 04 09:36:04 np0005545273.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:04 np0005545273.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:36:04 np0005545273.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:04 np0005545273.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavidvmotscmtbttxjdnxclwqfxyvlrt ; /usr/bin/python3'
Dec 04 09:36:04 np0005545273.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:05 np0005545273.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764840964.2568264-273-187044394242049/source _original_basename=tmp1nbchae8 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:05 np0005545273.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:05 np0005545273.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthctneaqadnyogaahbuivoxzqrqbrem ; /usr/bin/python3'
Dec 04 09:36:05 np0005545273.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:05 np0005545273.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-c10e-9286-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:36:05 np0005545273.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:06 np0005545273.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-c10e-9286-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 04 09:36:06 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 04 09:36:06 np0005545273.novalocal irqbalance[793]: IRQ 26 affinity is now unmanaged
Dec 04 09:36:07 np0005545273.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:08 np0005545273.novalocal chronyd[791]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec 04 09:36:24 np0005545273.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgggidrgjomycttduonntxdvjcodejij ; /usr/bin/python3'
Dec 04 09:36:24 np0005545273.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:36:24 np0005545273.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:36:24 np0005545273.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Dec 04 09:36:29 np0005545273.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 04 09:37:03 np0005545273.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 04 09:37:03 np0005545273.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8714] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 04 09:37:03 np0005545273.novalocal systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8899] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8934] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8940] device (eth1): carrier: link connected
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8944] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8953] policy: auto-activating connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8959] device (eth1): Activation: starting connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8961] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8965] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8970] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 09:37:03 np0005545273.novalocal NetworkManager[860]: <info>  [1764841023.8976] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:37:05 np0005545273.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0f59-cfe6-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:37:14 np0005545273.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmezylzxqqjooqnjvvrgjvfwdfnuddnv ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 04 09:37:14 np0005545273.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:37:15 np0005545273.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:37:15 np0005545273.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Dec 04 09:37:15 np0005545273.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qztsdbcmtblyvqhvujbznunfbonyxstr ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 04 09:37:15 np0005545273.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:37:15 np0005545273.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764841034.8190563-102-173663138953724/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=74e396badec11bd73909255d1e70547a105775dc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:37:15 np0005545273.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Dec 04 09:37:16 np0005545273.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soijzgpyczosqwseokojiuqpgynnkbgz ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 04 09:37:16 np0005545273.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:37:16 np0005545273.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Stopping Network Manager...
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4579] caught SIGTERM, shutting down normally.
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4591] dhcp4 (eth0): canceled DHCP transaction
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4592] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4592] dhcp4 (eth0): state changed no lease
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4594] manager: NetworkManager state is now CONNECTING
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4691] dhcp4 (eth1): canceled DHCP transaction
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4691] dhcp4 (eth1): state changed no lease
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[860]: <info>  [1764841036.4751] exiting (success)
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Stopped Network Manager.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: NetworkManager.service: Consumed 1.054s CPU time, 10.0M memory peak.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Starting Network Manager...
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.5277] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.5284] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.5342] manager[0x562850136070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Starting Hostname Service...
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Started Hostname Service.
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6535] hostname: hostname: using hostnamed
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6539] hostname: static hostname changed from (none) to "np0005545273.novalocal"
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6546] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6552] manager[0x562850136070]: rfkill: Wi-Fi hardware radio set enabled
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6552] manager[0x562850136070]: rfkill: WWAN hardware radio set enabled
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6582] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6582] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6583] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6583] manager: Networking is enabled by state file
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6586] settings: Loaded settings plugin: keyfile (internal)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6589] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6614] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6624] dhcp: init: Using DHCP client 'internal'
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6626] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6633] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6638] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6646] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6652] device (eth0): carrier: link connected
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6657] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6662] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6662] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6667] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6674] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6680] device (eth1): carrier: link connected
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6685] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6689] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93) (indicated)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6689] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6695] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6703] device (eth1): Activation: starting connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6709] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6713] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Started Network Manager.
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6725] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6728] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6731] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6734] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6736] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6739] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6745] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6752] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6755] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6768] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6788] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6790] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6796] device (lo): Activation: successful, device activated.
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6805] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6814] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 04 09:37:16 np0005545273.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6898] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6919] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6922] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6926] manager: NetworkManager state is now CONNECTED_SITE
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6936] device (eth0): Activation: successful, device activated.
Dec 04 09:37:16 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841036.6942] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 04 09:37:16 np0005545273.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Dec 04 09:37:17 np0005545273.novalocal python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0f59-cfe6-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:37:20 np0005545273.novalocal sshd-session[7262]: Invalid user debian from 66.45.144.201 port 51978
Dec 04 09:37:20 np0005545273.novalocal sshd-session[7262]: Connection closed by invalid user debian 66.45.144.201 port 51978 [preauth]
Dec 04 09:37:26 np0005545273.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 09:37:41 np0005545273.novalocal systemd[4300]: Starting Mark boot as successful...
Dec 04 09:37:41 np0005545273.novalocal systemd[4300]: Finished Mark boot as successful.
Dec 04 09:37:46 np0005545273.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3565] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 04 09:38:02 np0005545273.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 09:38:02 np0005545273.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3898] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3902] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3914] device (eth1): Activation: successful, device activated.
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3924] manager: startup complete
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3928] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <warn>  [1764841082.3937] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.3948] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4021] dhcp4 (eth1): canceled DHCP transaction
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4022] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4022] dhcp4 (eth1): state changed no lease
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4043] policy: auto-activating connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4052] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4054] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4057] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4068] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4082] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4142] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4144] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 09:38:02 np0005545273.novalocal NetworkManager[7184]: <info>  [1764841082.4155] device (eth1): Activation: successful, device activated.
Dec 04 09:38:12 np0005545273.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 09:38:17 np0005545273.novalocal sshd-session[4309]: Received disconnect from 38.102.83.114 port 40382:11: disconnected by user
Dec 04 09:38:17 np0005545273.novalocal sshd-session[4309]: Disconnected from user zuul 38.102.83.114 port 40382
Dec 04 09:38:17 np0005545273.novalocal sshd-session[4296]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:38:17 np0005545273.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Dec 04 09:38:22 np0005545273.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 42476 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 09:38:22 np0005545273.novalocal systemd-logind[798]: New session 3 of user zuul.
Dec 04 09:38:22 np0005545273.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 04 09:38:22 np0005545273.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:38:22 np0005545273.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydlnevuuzpjwsxmhrovztxybivsztctx ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 04 09:38:22 np0005545273.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:38:22 np0005545273.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:38:22 np0005545273.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Dec 04 09:38:22 np0005545273.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqdzjnfrxctaxtjhsjezslkyzezgwhb ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 04 09:38:22 np0005545273.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:38:22 np0005545273.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841102.1596544-267-162519117284172/source _original_basename=tmpip6e7upn follow=False checksum=ff6fb6bb40e9eca3d2188a5a673f0d4ae4acf72d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:38:22 np0005545273.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Dec 04 09:38:25 np0005545273.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 42476
Dec 04 09:38:25 np0005545273.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:38:25 np0005545273.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Dec 04 09:38:25 np0005545273.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 04 09:38:25 np0005545273.novalocal systemd-logind[798]: Removed session 3.
Dec 04 09:38:32 np0005545273.novalocal sshd-session[7469]: Connection closed by 60.172.41.103 port 51455
Dec 04 09:40:41 np0005545273.novalocal systemd[4300]: Created slice User Background Tasks Slice.
Dec 04 09:40:41 np0005545273.novalocal systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Dec 04 09:40:41 np0005545273.novalocal systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Dec 04 09:45:18 np0005545273.novalocal sshd-session[7475]: Invalid user admin from 80.250.155.76 port 56354
Dec 04 09:45:18 np0005545273.novalocal sshd-session[7475]: Connection closed by invalid user admin 80.250.155.76 port 56354 [preauth]
Dec 04 09:45:57 np0005545273.novalocal sshd-session[7480]: Accepted publickey for zuul from 38.102.83.114 port 40950 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 09:45:57 np0005545273.novalocal systemd-logind[798]: New session 4 of user zuul.
Dec 04 09:45:57 np0005545273.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 04 09:45:57 np0005545273.novalocal sshd-session[7480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:45:57 np0005545273.novalocal sudo[7507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpgtgdgftrwdgigvjwqeymanztynynk ; /usr/bin/python3'
Dec 04 09:45:57 np0005545273.novalocal sudo[7507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:57 np0005545273.novalocal python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-498b-906a-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:45:57 np0005545273.novalocal sudo[7507]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:57 np0005545273.novalocal sudo[7535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvqjserfejqorwwixnphcelixkoahrht ; /usr/bin/python3'
Dec 04 09:45:57 np0005545273.novalocal sudo[7535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:57 np0005545273.novalocal python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:45:57 np0005545273.novalocal sudo[7535]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:58 np0005545273.novalocal sudo[7561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzldetgmhtmobnniyevjyulkopoejzd ; /usr/bin/python3'
Dec 04 09:45:58 np0005545273.novalocal sudo[7561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:58 np0005545273.novalocal python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:45:58 np0005545273.novalocal sudo[7561]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:58 np0005545273.novalocal sudo[7588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmiyvzktdeqjafmfykwgpbfnhelmrxac ; /usr/bin/python3'
Dec 04 09:45:58 np0005545273.novalocal sudo[7588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:58 np0005545273.novalocal python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:45:58 np0005545273.novalocal sudo[7588]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:58 np0005545273.novalocal sudo[7614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcutqbocjvgrbdksqhpjydkxokqxuuxx ; /usr/bin/python3'
Dec 04 09:45:58 np0005545273.novalocal sudo[7614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:58 np0005545273.novalocal python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:45:58 np0005545273.novalocal sudo[7614]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:59 np0005545273.novalocal sudo[7640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfazbzeskiacdvcivmqnxqtuuowevit ; /usr/bin/python3'
Dec 04 09:45:59 np0005545273.novalocal sudo[7640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:59 np0005545273.novalocal python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:45:59 np0005545273.novalocal sudo[7640]: pam_unix(sudo:session): session closed for user root
Dec 04 09:45:59 np0005545273.novalocal sudo[7718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwlszfemovvpjlenkqaazlvwmiwkkhb ; /usr/bin/python3'
Dec 04 09:45:59 np0005545273.novalocal sudo[7718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:45:59 np0005545273.novalocal python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:45:59 np0005545273.novalocal sudo[7718]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:00 np0005545273.novalocal sudo[7791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlteamvzkwrotezosfhmzmipcjslqin ; /usr/bin/python3'
Dec 04 09:46:00 np0005545273.novalocal sudo[7791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:01 np0005545273.novalocal python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841559.552521-479-178493040891327/source _original_basename=tmpdf5infue follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:46:01 np0005545273.novalocal sudo[7791]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:01 np0005545273.novalocal sudo[7841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhldfojaibytqkcqgdgjjpgltefhbxa ; /usr/bin/python3'
Dec 04 09:46:01 np0005545273.novalocal sudo[7841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:01 np0005545273.novalocal python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 09:46:02 np0005545273.novalocal systemd[1]: Reloading.
Dec 04 09:46:02 np0005545273.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 09:46:02 np0005545273.novalocal sudo[7841]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:03 np0005545273.novalocal sudo[7896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lseymznikniuszpcmwinjweooidslmjn ; /usr/bin/python3'
Dec 04 09:46:03 np0005545273.novalocal sudo[7896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:03 np0005545273.novalocal python3[7898]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 04 09:46:03 np0005545273.novalocal sudo[7896]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:04 np0005545273.novalocal sudo[7922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkpohzfiskewotvfpnopolmmnfvrhawg ; /usr/bin/python3'
Dec 04 09:46:04 np0005545273.novalocal sudo[7922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:04 np0005545273.novalocal python3[7924]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:46:04 np0005545273.novalocal sudo[7922]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:04 np0005545273.novalocal sudo[7950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uluzpbyykeekqoaedcdpxpwwpjhtxlbl ; /usr/bin/python3'
Dec 04 09:46:04 np0005545273.novalocal sudo[7950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:04 np0005545273.novalocal python3[7952]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:46:04 np0005545273.novalocal sudo[7950]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:04 np0005545273.novalocal sudo[7978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtlpcaokpxzeefqxuihjudsiafhjigvk ; /usr/bin/python3'
Dec 04 09:46:04 np0005545273.novalocal sudo[7978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:04 np0005545273.novalocal python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:46:04 np0005545273.novalocal sudo[7978]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:04 np0005545273.novalocal sudo[8006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdcvsczazhylmkeansijzmnwmiabflgf ; /usr/bin/python3'
Dec 04 09:46:04 np0005545273.novalocal sudo[8006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:05 np0005545273.novalocal python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:46:05 np0005545273.novalocal sudo[8006]: pam_unix(sudo:session): session closed for user root
Dec 04 09:46:05 np0005545273.novalocal python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-498b-906a-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:46:06 np0005545273.novalocal python3[8065]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 09:46:07 np0005545273.novalocal sshd-session[7483]: Connection closed by 38.102.83.114 port 40950
Dec 04 09:46:07 np0005545273.novalocal sshd-session[7480]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:46:07 np0005545273.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 04 09:46:07 np0005545273.novalocal systemd[1]: session-4.scope: Consumed 4.746s CPU time.
Dec 04 09:46:07 np0005545273.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Dec 04 09:46:07 np0005545273.novalocal systemd-logind[798]: Removed session 4.
Dec 04 09:46:09 np0005545273.novalocal sshd-session[8070]: Accepted publickey for zuul from 38.102.83.114 port 46116 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 09:46:09 np0005545273.novalocal systemd-logind[798]: New session 5 of user zuul.
Dec 04 09:46:09 np0005545273.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 04 09:46:09 np0005545273.novalocal sshd-session[8070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:46:09 np0005545273.novalocal sudo[8097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiizhczsxhsnsdmtnrofasifickxqall ; /usr/bin/python3'
Dec 04 09:46:09 np0005545273.novalocal sudo[8097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:46:09 np0005545273.novalocal python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 09:46:25 np0005545273.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 09:46:34 np0005545273.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 09:46:43 np0005545273.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 09:46:44 np0005545273.novalocal setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 04 09:46:44 np0005545273.novalocal setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 09:46:55 np0005545273.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 09:47:12 np0005545273.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 04 09:47:12 np0005545273.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 09:47:12 np0005545273.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 04 09:47:12 np0005545273.novalocal systemd[1]: Reloading.
Dec 04 09:47:13 np0005545273.novalocal systemd-rc-local-generator[8922]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 09:47:13 np0005545273.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 09:47:15 np0005545273.novalocal sudo[8097]: pam_unix(sudo:session): session closed for user root
Dec 04 09:47:32 np0005545273.novalocal python3[17676]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-7a6d-a5d1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:47:33 np0005545273.novalocal kernel: evm: overlay not supported
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: Starting D-Bus User Message Bus...
Dec 04 09:47:33 np0005545273.novalocal dbus-broker-launch[18074]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 04 09:47:33 np0005545273.novalocal dbus-broker-launch[18074]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: Started D-Bus User Message Bus.
Dec 04 09:47:33 np0005545273.novalocal dbus-broker-lau[18074]: Ready
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: Created slice Slice /user.
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: podman-18011.scope: unit configures an IP firewall, but not running as root.
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: Started podman-18011.scope.
Dec 04 09:47:33 np0005545273.novalocal systemd[4300]: Started podman-pause-337037d6.scope.
Dec 04 09:47:34 np0005545273.novalocal sudo[18337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urfvmrtcmzgrqwkepykpkeaiybhpktec ; /usr/bin/python3'
Dec 04 09:47:34 np0005545273.novalocal sudo[18337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:47:34 np0005545273.novalocal python3[18346]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.73:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.73:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:47:34 np0005545273.novalocal python3[18346]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 04 09:47:34 np0005545273.novalocal sudo[18337]: pam_unix(sudo:session): session closed for user root
Dec 04 09:47:34 np0005545273.novalocal sshd-session[8073]: Connection closed by 38.102.83.114 port 46116
Dec 04 09:47:34 np0005545273.novalocal sshd-session[8070]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:47:34 np0005545273.novalocal systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Dec 04 09:47:34 np0005545273.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 04 09:47:34 np0005545273.novalocal systemd[1]: session-5.scope: Consumed 59.194s CPU time.
Dec 04 09:47:34 np0005545273.novalocal systemd-logind[798]: Removed session 5.
Dec 04 09:47:46 np0005545273.novalocal irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 04 09:47:46 np0005545273.novalocal irqbalance[793]: IRQ 27 affinity is now unmanaged
Dec 04 09:47:54 np0005545273.novalocal sshd-session[24833]: Connection closed by 38.102.83.189 port 57248 [preauth]
Dec 04 09:47:55 np0005545273.novalocal sshd-session[24836]: Connection closed by 38.102.83.189 port 57242 [preauth]
Dec 04 09:47:55 np0005545273.novalocal sshd-session[24835]: Unable to negotiate with 38.102.83.189 port 57272: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 04 09:47:55 np0005545273.novalocal sshd-session[24837]: Unable to negotiate with 38.102.83.189 port 57250: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 04 09:47:55 np0005545273.novalocal sshd-session[24839]: Unable to negotiate with 38.102.83.189 port 57266: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 04 09:47:59 np0005545273.novalocal sshd-session[26506]: Accepted publickey for zuul from 38.102.83.114 port 50898 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 09:47:59 np0005545273.novalocal systemd-logind[798]: New session 6 of user zuul.
Dec 04 09:47:59 np0005545273.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 04 09:47:59 np0005545273.novalocal sshd-session[26506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:48:00 np0005545273.novalocal python3[26613]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:48:00 np0005545273.novalocal sudo[26807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpsrxqkgfkrggluyyfrnrbeguqryhesy ; /usr/bin/python3'
Dec 04 09:48:00 np0005545273.novalocal sudo[26807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:00 np0005545273.novalocal python3[26821]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:48:00 np0005545273.novalocal sudo[26807]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:01 np0005545273.novalocal sudo[27111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zphowrndtomezngwdzfsgkhqdldauwoe ; /usr/bin/python3'
Dec 04 09:48:01 np0005545273.novalocal sudo[27111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:01 np0005545273.novalocal python3[27120]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005545273.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 04 09:48:01 np0005545273.novalocal useradd[27196]: new group: name=cloud-admin, GID=1002
Dec 04 09:48:01 np0005545273.novalocal useradd[27196]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 04 09:48:01 np0005545273.novalocal sudo[27111]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:01 np0005545273.novalocal sudo[27336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdyuzceduleibcstduujmupngsnfyho ; /usr/bin/python3'
Dec 04 09:48:01 np0005545273.novalocal sudo[27336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:01 np0005545273.novalocal python3[27343]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 04 09:48:01 np0005545273.novalocal sudo[27336]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:02 np0005545273.novalocal sudo[27640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwktzpgspwnwtqhytxtuqqbtqavoroef ; /usr/bin/python3'
Dec 04 09:48:02 np0005545273.novalocal sudo[27640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:02 np0005545273.novalocal python3[27649]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:48:02 np0005545273.novalocal sudo[27640]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:02 np0005545273.novalocal sudo[27896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdtouhrlyeisiirwmujlbqinysgoxabk ; /usr/bin/python3'
Dec 04 09:48:02 np0005545273.novalocal sudo[27896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:02 np0005545273.novalocal python3[27903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841682.0918944-135-246479739412575/source _original_basename=tmp30zivo4v follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:48:02 np0005545273.novalocal sudo[27896]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:03 np0005545273.novalocal sudo[28172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrxzvzqbxomcqfqybjsyupbdcgcsbep ; /usr/bin/python3'
Dec 04 09:48:03 np0005545273.novalocal sudo[28172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:48:03 np0005545273.novalocal python3[28182]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 04 09:48:03 np0005545273.novalocal systemd[1]: Starting Hostname Service...
Dec 04 09:48:03 np0005545273.novalocal systemd[1]: Started Hostname Service.
Dec 04 09:48:03 np0005545273.novalocal systemd-hostnamed[28291]: Changed pretty hostname to 'compute-0'
Dec 04 09:48:03 compute-0 systemd-hostnamed[28291]: Hostname set to <compute-0> (static)
Dec 04 09:48:03 compute-0 NetworkManager[7184]: <info>  [1764841683.9466] hostname: static hostname changed from "np0005545273.novalocal" to "compute-0"
Dec 04 09:48:03 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 09:48:03 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 09:48:03 compute-0 sudo[28172]: pam_unix(sudo:session): session closed for user root
Dec 04 09:48:04 compute-0 sshd-session[26554]: Connection closed by 38.102.83.114 port 50898
Dec 04 09:48:04 compute-0 sshd-session[26506]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:48:04 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Dec 04 09:48:04 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 04 09:48:04 compute-0 systemd[1]: session-6.scope: Consumed 2.425s CPU time.
Dec 04 09:48:04 compute-0 systemd-logind[798]: Removed session 6.
Dec 04 09:48:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 09:48:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 09:48:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 6.035s CPU time.
Dec 04 09:48:08 compute-0 systemd[1]: run-r1ae6d83a120f43108486f2c8e19e0c92.service: Deactivated successfully.
Dec 04 09:48:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 09:48:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 04 09:50:31 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 04 09:50:31 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 04 09:50:31 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 04 09:50:31 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 04 09:50:54 compute-0 sshd-session[29926]: Connection reset by authenticating user root 45.140.17.124 port 29676 [preauth]
Dec 04 09:50:56 compute-0 sshd-session[29928]: Connection reset by authenticating user root 45.140.17.124 port 29694 [preauth]
Dec 04 09:50:59 compute-0 sshd-session[29930]: Connection reset by authenticating user root 45.140.17.124 port 29696 [preauth]
Dec 04 09:51:01 compute-0 sshd-session[29932]: Connection reset by authenticating user root 45.140.17.124 port 29698 [preauth]
Dec 04 09:51:03 compute-0 sshd-session[29934]: Invalid user ubuntu from 45.140.17.124 port 26382
Dec 04 09:51:03 compute-0 sshd-session[29934]: Connection reset by invalid user ubuntu 45.140.17.124 port 26382 [preauth]
Dec 04 09:52:30 compute-0 sshd-session[29937]: Accepted publickey for zuul from 38.102.83.189 port 60788 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 09:52:30 compute-0 systemd-logind[798]: New session 7 of user zuul.
Dec 04 09:52:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 04 09:52:30 compute-0 sshd-session[29937]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 09:52:30 compute-0 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 09:52:32 compute-0 sudo[30128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kamvjfqzaeyyxeapwrnvwodtgubuprni ; /usr/bin/python3'
Dec 04 09:52:32 compute-0 sudo[30128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:32 compute-0 python3[30130]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:32 compute-0 sudo[30128]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:33 compute-0 sudo[30201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizfyvjnszdrpafmawkuxeftcqpbdrju ; /usr/bin/python3'
Dec 04 09:52:33 compute-0 sudo[30201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:33 compute-0 python3[30203]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:33 compute-0 sudo[30201]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:33 compute-0 sudo[30227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvuvosceccvseyljtitqwqmopantqfx ; /usr/bin/python3'
Dec 04 09:52:33 compute-0 sudo[30227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:33 compute-0 python3[30229]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:33 compute-0 sudo[30227]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:33 compute-0 sudo[30300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmfwjqtwodrqtskibhtrjdlchlmqevua ; /usr/bin/python3'
Dec 04 09:52:33 compute-0 sudo[30300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:33 compute-0 python3[30302]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:33 compute-0 sudo[30300]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:34 compute-0 sudo[30326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzwzxhsdilgdnltaiqtcxbpdqpikjbpj ; /usr/bin/python3'
Dec 04 09:52:34 compute-0 sudo[30326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:34 compute-0 python3[30328]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:34 compute-0 sudo[30326]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:34 compute-0 sudo[30399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilbcxtpbkiovnbfknzpfbxziyquaekm ; /usr/bin/python3'
Dec 04 09:52:34 compute-0 sudo[30399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:34 compute-0 python3[30401]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:34 compute-0 sudo[30399]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:34 compute-0 sudo[30425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhzbauenrmjrevhsabltwmqlpkqlsdb ; /usr/bin/python3'
Dec 04 09:52:34 compute-0 sudo[30425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:34 compute-0 python3[30427]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:34 compute-0 sudo[30425]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:35 compute-0 sudo[30498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbenmntoyrfbknbudjbtngagjtelndyt ; /usr/bin/python3'
Dec 04 09:52:35 compute-0 sudo[30498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:35 compute-0 python3[30500]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:35 compute-0 sudo[30498]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:35 compute-0 sudo[30524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esainuymksinwindafceubshtwnbmfdo ; /usr/bin/python3'
Dec 04 09:52:35 compute-0 sudo[30524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:35 compute-0 python3[30526]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:35 compute-0 sudo[30524]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:35 compute-0 sudo[30597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfhejeplrbjyzvxcfzvoztxodfpfzsrg ; /usr/bin/python3'
Dec 04 09:52:35 compute-0 sudo[30597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:35 compute-0 python3[30599]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:35 compute-0 sudo[30597]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:36 compute-0 sudo[30623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kailcreszgnjlbgiiskfqvawkgbswnuy ; /usr/bin/python3'
Dec 04 09:52:36 compute-0 sudo[30623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:36 compute-0 python3[30625]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:36 compute-0 sudo[30623]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:36 compute-0 sudo[30696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjduliamesmpuybyzfonqgselvvqoaz ; /usr/bin/python3'
Dec 04 09:52:36 compute-0 sudo[30696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:36 compute-0 python3[30698]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:36 compute-0 sudo[30696]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:36 compute-0 sudo[30722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwarulxgthkpmrxacgqshdcgycymqlk ; /usr/bin/python3'
Dec 04 09:52:36 compute-0 sudo[30722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:36 compute-0 python3[30724]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 09:52:36 compute-0 sudo[30722]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:37 compute-0 sudo[30795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lewxnzoesaxvgstvbfurtfpgbbjqqvwu ; /usr/bin/python3'
Dec 04 09:52:37 compute-0 sudo[30795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 09:52:37 compute-0 python3[30797]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 09:52:37 compute-0 sudo[30795]: pam_unix(sudo:session): session closed for user root
Dec 04 09:52:39 compute-0 sshd-session[30823]: Unable to negotiate with 192.168.122.11 port 60712: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 04 09:52:39 compute-0 sshd-session[30825]: Connection closed by 192.168.122.11 port 60682 [preauth]
Dec 04 09:52:39 compute-0 sshd-session[30822]: Connection closed by 192.168.122.11 port 60688 [preauth]
Dec 04 09:52:39 compute-0 sshd-session[30824]: Unable to negotiate with 192.168.122.11 port 60700: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 04 09:52:39 compute-0 sshd-session[30827]: Unable to negotiate with 192.168.122.11 port 60722: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 04 09:52:50 compute-0 python3[30855]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 09:53:32 compute-0 sshd-session[30857]: Invalid user blank from 67.81.118.42 port 46686
Dec 04 09:53:32 compute-0 sshd-session[30857]: Connection closed by invalid user blank 67.81.118.42 port 46686 [preauth]
Dec 04 09:53:45 compute-0 sshd-session[30859]: Connection closed by authenticating user mail 37.46.160.175 port 40188 [preauth]
Dec 04 09:55:46 compute-0 sshd-session[30861]: Connection closed by 92.204.189.3 port 55120
Dec 04 09:56:48 compute-0 sshd-session[30863]: Connection reset by authenticating user root 91.202.233.33 port 40960 [preauth]
Dec 04 09:56:50 compute-0 sshd-session[30866]: Invalid user git from 91.202.233.33 port 40972
Dec 04 09:56:50 compute-0 sshd-session[30866]: Connection reset by invalid user git 91.202.233.33 port 40972 [preauth]
Dec 04 09:56:52 compute-0 sshd-session[30868]: Invalid user admin from 91.202.233.33 port 22220
Dec 04 09:56:52 compute-0 sshd-session[30868]: Connection reset by invalid user admin 91.202.233.33 port 22220 [preauth]
Dec 04 09:56:54 compute-0 sshd-session[30870]: Connection reset by authenticating user root 91.202.233.33 port 22236 [preauth]
Dec 04 09:56:57 compute-0 sshd-session[30872]: Connection reset by authenticating user root 91.202.233.33 port 22260 [preauth]
Dec 04 09:57:50 compute-0 sshd-session[29940]: Received disconnect from 38.102.83.189 port 60788:11: disconnected by user
Dec 04 09:57:50 compute-0 sshd-session[29940]: Disconnected from user zuul 38.102.83.189 port 60788
Dec 04 09:57:50 compute-0 sshd-session[29937]: pam_unix(sshd:session): session closed for user zuul
Dec 04 09:57:50 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 04 09:57:50 compute-0 systemd[1]: session-7.scope: Consumed 5.629s CPU time.
Dec 04 09:57:50 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Dec 04 09:57:50 compute-0 systemd-logind[798]: Removed session 7.
Dec 04 09:59:50 compute-0 sshd[1008]: Timeout before authentication for connection from 67.201.58.185 to 38.102.83.169, pid = 30874
Dec 04 10:01:01 compute-0 CROND[30877]: (root) CMD (run-parts /etc/cron.hourly)
Dec 04 10:01:01 compute-0 run-parts[30880]: (/etc/cron.hourly) starting 0anacron
Dec 04 10:01:01 compute-0 anacron[30888]: Anacron started on 2025-12-04
Dec 04 10:01:01 compute-0 anacron[30888]: Will run job `cron.daily' in 15 min.
Dec 04 10:01:01 compute-0 anacron[30888]: Will run job `cron.weekly' in 35 min.
Dec 04 10:01:01 compute-0 anacron[30888]: Will run job `cron.monthly' in 55 min.
Dec 04 10:01:01 compute-0 anacron[30888]: Jobs will be executed sequentially
Dec 04 10:01:01 compute-0 run-parts[30890]: (/etc/cron.hourly) finished 0anacron
Dec 04 10:01:01 compute-0 CROND[30876]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 04 10:03:00 compute-0 sshd-session[30897]: Received disconnect from 217.154.62.22 port 45204:11: Bye Bye [preauth]
Dec 04 10:03:00 compute-0 sshd-session[30897]: Disconnected from authenticating user root 217.154.62.22 port 45204 [preauth]
Dec 04 10:03:05 compute-0 sshd-session[30899]: Invalid user kiosk from 103.179.218.243 port 40260
Dec 04 10:03:05 compute-0 sshd-session[30899]: Received disconnect from 103.179.218.243 port 40260:11: Bye Bye [preauth]
Dec 04 10:03:05 compute-0 sshd-session[30899]: Disconnected from invalid user kiosk 103.179.218.243 port 40260 [preauth]
Dec 04 10:03:41 compute-0 systemd[1]: Starting dnf makecache...
Dec 04 10:03:42 compute-0 dnf[30904]: Failed determining last makecache time.
Dec 04 10:03:43 compute-0 dnf[30904]: delorean-openstack-barbican-42b4c41831408a8e323  20 kB/s |  13 kB     00:00
Dec 04 10:03:43 compute-0 dnf[30904]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 219 kB/s |  65 kB     00:00
Dec 04 10:03:43 compute-0 dnf[30904]: delorean-openstack-cinder-1c00d6490d88e436f26ef 827 kB/s |  32 kB     00:00
Dec 04 10:03:44 compute-0 dnf[30904]: delorean-python-stevedore-c4acc5639fd2329372142 401 kB/s | 131 kB     00:00
Dec 04 10:03:44 compute-0 dnf[30904]: delorean-python-cloudkitty-tests-tempest-2c80f8 115 kB/s |  32 kB     00:00
Dec 04 10:03:45 compute-0 dnf[30904]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 264 kB/s | 349 kB     00:01
Dec 04 10:03:47 compute-0 dnf[30904]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s |  42 kB     00:01
Dec 04 10:03:48 compute-0 dnf[30904]: delorean-python-designate-tests-tempest-347fdbc  28 kB/s |  18 kB     00:00
Dec 04 10:03:49 compute-0 dnf[30904]: delorean-openstack-glance-1fd12c29b339f30fe823e  19 kB/s |  18 kB     00:00
Dec 04 10:03:50 compute-0 dnf[30904]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  33 kB/s |  29 kB     00:00
Dec 04 10:03:50 compute-0 dnf[30904]: delorean-openstack-manila-3c01b7181572c95dac462  46 kB/s |  25 kB     00:00
Dec 04 10:03:52 compute-0 dnf[30904]: delorean-python-whitebox-neutron-tests-tempest-  78 kB/s | 154 kB     00:01
Dec 04 10:03:53 compute-0 dnf[30904]: delorean-openstack-octavia-ba397f07a7331190208c 239 kB/s |  26 kB     00:00
Dec 04 10:03:53 compute-0 dnf[30904]: delorean-openstack-watcher-c014f81a8647287f6dcc  24 kB/s |  16 kB     00:00
Dec 04 10:03:54 compute-0 dnf[30904]: delorean-ansible-config_template-5ccaa22121a7ff  25 kB/s | 7.4 kB     00:00
Dec 04 10:03:54 compute-0 dnf[30904]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 1.0 MB/s | 144 kB     00:00
Dec 04 10:03:54 compute-0 dnf[30904]: delorean-openstack-swift-dc98a8463506ac520c469a  77 kB/s |  14 kB     00:00
Dec 04 10:03:55 compute-0 dnf[30904]: delorean-python-tempestconf-8515371b7cceebd4282  66 kB/s |  53 kB     00:00
Dec 04 10:03:55 compute-0 dnf[30904]: delorean-openstack-heat-ui-013accbfd179753bc3f0 1.1 MB/s |  96 kB     00:00
Dec 04 10:03:55 compute-0 dnf[30904]: CentOS Stream 9 - BaseOS                         75 kB/s | 7.0 kB     00:00
Dec 04 10:03:55 compute-0 dnf[30904]: CentOS Stream 9 - AppStream                      68 kB/s | 7.1 kB     00:00
Dec 04 10:03:56 compute-0 dnf[30904]: CentOS Stream 9 - CRB                            29 kB/s | 6.9 kB     00:00
Dec 04 10:03:56 compute-0 dnf[30904]: CentOS Stream 9 - Extras packages                76 kB/s | 8.3 kB     00:00
Dec 04 10:03:56 compute-0 dnf[30904]: dlrn-antelope-testing                           2.7 MB/s | 1.1 MB     00:00
Dec 04 10:03:57 compute-0 dnf[30904]: dlrn-antelope-build-deps                        1.6 MB/s | 461 kB     00:00
Dec 04 10:03:57 compute-0 dnf[30904]: centos9-rabbitmq                                1.3 MB/s | 123 kB     00:00
Dec 04 10:03:57 compute-0 sshd-session[30964]: Invalid user radarr from 101.47.163.20 port 60168
Dec 04 10:03:57 compute-0 dnf[30904]: centos9-storage                                 1.4 MB/s | 415 kB     00:00
Dec 04 10:03:58 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.114.2 to 38.102.83.169, pid = 30892
Dec 04 10:03:58 compute-0 sshd-session[30964]: Received disconnect from 101.47.163.20 port 60168:11: Bye Bye [preauth]
Dec 04 10:03:58 compute-0 sshd-session[30964]: Disconnected from invalid user radarr 101.47.163.20 port 60168 [preauth]
Dec 04 10:03:58 compute-0 dnf[30904]: centos9-opstools                                124 kB/s |  51 kB     00:00
Dec 04 10:03:59 compute-0 dnf[30904]: NFV SIG OpenvSwitch                             438 kB/s | 456 kB     00:01
Dec 04 10:04:01 compute-0 dnf[30904]: repo-setup-centos-appstream                      13 MB/s |  25 MB     00:02
Dec 04 10:04:02 compute-0 sshd-session[30990]: Invalid user g from 103.149.86.230 port 42044
Dec 04 10:04:02 compute-0 sshd-session[30990]: Received disconnect from 103.149.86.230 port 42044:11: Bye Bye [preauth]
Dec 04 10:04:02 compute-0 sshd-session[30990]: Disconnected from invalid user g 103.149.86.230 port 42044 [preauth]
Dec 04 10:04:08 compute-0 dnf[30904]: repo-setup-centos-baseos                         16 MB/s | 8.8 MB     00:00
Dec 04 10:04:09 compute-0 dnf[30904]: repo-setup-centos-highavailability              6.4 MB/s | 744 kB     00:00
Dec 04 10:04:10 compute-0 dnf[30904]: repo-setup-centos-powertools                     21 MB/s | 7.3 MB     00:00
Dec 04 10:04:10 compute-0 sshd-session[31005]: Invalid user frontend from 107.175.213.239 port 44822
Dec 04 10:04:10 compute-0 sshd-session[31005]: Received disconnect from 107.175.213.239 port 44822:11: Bye Bye [preauth]
Dec 04 10:04:10 compute-0 sshd-session[31005]: Disconnected from invalid user frontend 107.175.213.239 port 44822 [preauth]
Dec 04 10:04:13 compute-0 sshd-session[31012]: Invalid user terraria from 74.249.218.27 port 57716
Dec 04 10:04:13 compute-0 sshd-session[31012]: Received disconnect from 74.249.218.27 port 57716:11: Bye Bye [preauth]
Dec 04 10:04:13 compute-0 sshd-session[31012]: Disconnected from invalid user terraria 74.249.218.27 port 57716 [preauth]
Dec 04 10:04:14 compute-0 dnf[30904]: Extra Packages for Enterprise Linux 9 - x86_64  8.5 MB/s |  20 MB     00:02
Dec 04 10:04:28 compute-0 sshd-session[31014]: Invalid user admin from 59.24.194.207 port 46908
Dec 04 10:04:28 compute-0 sshd-session[31014]: Connection closed by invalid user admin 59.24.194.207 port 46908 [preauth]
Dec 04 10:04:30 compute-0 dnf[30904]: Metadata cache created.
Dec 04 10:04:30 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 04 10:04:30 compute-0 systemd[1]: Finished dnf makecache.
Dec 04 10:04:30 compute-0 systemd[1]: dnf-makecache.service: Consumed 26.373s CPU time.
Dec 04 10:04:44 compute-0 sshd[1008]: Timeout before authentication for connection from 36.212.214.212 to 38.102.83.169, pid = 30895
Dec 04 10:04:52 compute-0 sshd-session[31020]: Accepted publickey for zuul from 192.168.122.30 port 50514 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:04:52 compute-0 systemd-logind[798]: New session 8 of user zuul.
Dec 04 10:04:52 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 04 10:04:52 compute-0 sshd-session[31020]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:04:53 compute-0 python3.9[31173]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:04:54 compute-0 sudo[31352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxuxeusbokvqevyvcqujaxnbirzrhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842694.520577-32-120291314259872/AnsiballZ_command.py'
Dec 04 10:04:54 compute-0 sudo[31352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:04:55 compute-0 python3.9[31354]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:05:02 compute-0 sudo[31352]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:03 compute-0 sshd-session[31023]: Connection closed by 192.168.122.30 port 50514
Dec 04 10:05:03 compute-0 sshd-session[31020]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:05:03 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 04 10:05:03 compute-0 systemd[1]: session-8.scope: Consumed 8.031s CPU time.
Dec 04 10:05:03 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Dec 04 10:05:03 compute-0 systemd-logind[798]: Removed session 8.
Dec 04 10:05:14 compute-0 sshd-session[31413]: Received disconnect from 217.154.62.22 port 49868:11: Bye Bye [preauth]
Dec 04 10:05:14 compute-0 sshd-session[31413]: Disconnected from authenticating user root 217.154.62.22 port 49868 [preauth]
Dec 04 10:05:18 compute-0 sshd-session[31416]: Accepted publickey for zuul from 192.168.122.30 port 53324 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:05:18 compute-0 systemd-logind[798]: New session 9 of user zuul.
Dec 04 10:05:18 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 04 10:05:18 compute-0 sshd-session[31416]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:05:19 compute-0 python3.9[31570]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 04 10:05:20 compute-0 python3.9[31744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:05:21 compute-0 sudo[31894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahevwrpyeiqijkmvvvarhagyrrzodes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842720.9499002-45-247350019814195/AnsiballZ_command.py'
Dec 04 10:05:21 compute-0 sudo[31894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:21 compute-0 python3.9[31896]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:05:21 compute-0 sudo[31894]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:22 compute-0 sudo[32047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pceobozmswzafiqavnnezmubitjtuhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842721.852307-57-190207205914932/AnsiballZ_stat.py'
Dec 04 10:05:22 compute-0 sudo[32047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:22 compute-0 python3.9[32049]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:05:22 compute-0 sudo[32047]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:23 compute-0 sudo[32199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsboxiddimzmpkauauwmegkrcenmccfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842722.6444492-65-67216684481562/AnsiballZ_file.py'
Dec 04 10:05:23 compute-0 sudo[32199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:23 compute-0 python3.9[32201]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:05:23 compute-0 sudo[32199]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:23 compute-0 sudo[32351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcdzxlpgfpdejrsvlvparsdfatghkadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842723.4979672-73-136369126107186/AnsiballZ_stat.py'
Dec 04 10:05:23 compute-0 sudo[32351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:24 compute-0 python3.9[32353]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:05:24 compute-0 sudo[32351]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:24 compute-0 sudo[32474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppupwgltklxotwiakycxiohywylerrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842723.4979672-73-136369126107186/AnsiballZ_copy.py'
Dec 04 10:05:24 compute-0 sudo[32474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:24 compute-0 python3.9[32476]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764842723.4979672-73-136369126107186/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:05:24 compute-0 sudo[32474]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:25 compute-0 sudo[32626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cktrwfhcwglghfxavjvmlvjpqbvvbxvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842724.90809-88-117627343573070/AnsiballZ_setup.py'
Dec 04 10:05:25 compute-0 sudo[32626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:25 compute-0 python3.9[32628]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:05:25 compute-0 sudo[32626]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:26 compute-0 sudo[32782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdvzskomcejoliwegkjecwlnjdxlhnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842725.8274212-96-205794431562418/AnsiballZ_file.py'
Dec 04 10:05:26 compute-0 sudo[32782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:26 compute-0 python3.9[32784]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:05:26 compute-0 sudo[32782]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:26 compute-0 sudo[32934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvfuaanbymrwapdxvjhwzijmragzlin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842726.4409096-105-89379025615514/AnsiballZ_file.py'
Dec 04 10:05:26 compute-0 sudo[32934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:26 compute-0 python3.9[32936]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:05:27 compute-0 sudo[32934]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:27 compute-0 python3.9[33086]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:05:31 compute-0 python3.9[33339]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:05:32 compute-0 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:05:33 compute-0 python3.9[33643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:05:33 compute-0 sudo[33799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizulqkriloeakrlfylrdtvacpfggvcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842733.6337767-153-229996645878176/AnsiballZ_setup.py'
Dec 04 10:05:33 compute-0 sudo[33799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:34 compute-0 python3.9[33801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:05:34 compute-0 sudo[33799]: pam_unix(sudo:session): session closed for user root
Dec 04 10:05:34 compute-0 sudo[33883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxdtadllttooqsuawtgffeqwdpvjmshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842733.6337767-153-229996645878176/AnsiballZ_dnf.py'
Dec 04 10:05:34 compute-0 sudo[33883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:05:35 compute-0 python3.9[33885]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:05:44 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.112.108 to 38.102.83.169, pid = 30901
Dec 04 10:05:44 compute-0 sshd-session[33955]: Invalid user astra from 103.179.218.243 port 40418
Dec 04 10:05:44 compute-0 sshd-session[33955]: Received disconnect from 103.179.218.243 port 40418:11: Bye Bye [preauth]
Dec 04 10:05:44 compute-0 sshd-session[33955]: Disconnected from invalid user astra 103.179.218.243 port 40418 [preauth]
Dec 04 10:06:18 compute-0 systemd[1]: Reloading.
Dec 04 10:06:18 compute-0 systemd-rc-local-generator[34087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:06:18 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 04 10:06:18 compute-0 systemd[1]: Reloading.
Dec 04 10:06:18 compute-0 systemd-rc-local-generator[34127]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:06:19 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 04 10:06:19 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 04 10:06:19 compute-0 systemd[1]: Reloading.
Dec 04 10:06:19 compute-0 systemd-rc-local-generator[34173]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:06:19 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 04 10:06:19 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 04 10:06:19 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 04 10:06:19 compute-0 sshd-session[34145]: Invalid user debian from 217.154.62.22 port 38936
Dec 04 10:06:19 compute-0 sshd-session[34145]: Received disconnect from 217.154.62.22 port 38936:11: Bye Bye [preauth]
Dec 04 10:06:19 compute-0 sshd-session[34145]: Disconnected from invalid user debian 217.154.62.22 port 38936 [preauth]
Dec 04 10:06:35 compute-0 sshd-session[34225]: Received disconnect from 103.149.86.230 port 54878:11: Bye Bye [preauth]
Dec 04 10:06:35 compute-0 sshd-session[34225]: Disconnected from authenticating user root 103.149.86.230 port 54878 [preauth]
Dec 04 10:06:36 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.112.103 to 38.102.83.169, pid = 31019
Dec 04 10:06:50 compute-0 sshd-session[34274]: Received disconnect from 74.249.218.27 port 60658:11: Bye Bye [preauth]
Dec 04 10:06:50 compute-0 sshd-session[34274]: Disconnected from authenticating user root 74.249.218.27 port 60658 [preauth]
Dec 04 10:06:56 compute-0 sshd-session[34302]: Received disconnect from 107.175.213.239 port 50820:11: Bye Bye [preauth]
Dec 04 10:06:56 compute-0 sshd-session[34302]: Disconnected from authenticating user root 107.175.213.239 port 50820 [preauth]
Dec 04 10:07:00 compute-0 sshd-session[34305]: Connection closed by 101.47.163.20 port 42436 [preauth]
Dec 04 10:07:04 compute-0 sshd[1008]: Timeout before authentication for connection from 123.156.230.101 to 38.102.83.169, pid = 31379
Dec 04 10:07:08 compute-0 sshd-session[34377]: Invalid user g from 103.179.218.243 port 40524
Dec 04 10:07:08 compute-0 sshd-session[34377]: Received disconnect from 103.179.218.243 port 40524:11: Bye Bye [preauth]
Dec 04 10:07:08 compute-0 sshd-session[34377]: Disconnected from invalid user g 103.179.218.243 port 40524 [preauth]
Dec 04 10:07:21 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.118.217 to 38.102.83.169, pid = 31415
Dec 04 10:07:28 compute-0 sshd-session[34434]: Invalid user server from 217.154.62.22 port 40948
Dec 04 10:07:28 compute-0 sshd-session[34434]: Received disconnect from 217.154.62.22 port 40948:11: Bye Bye [preauth]
Dec 04 10:07:28 compute-0 sshd-session[34434]: Disconnected from invalid user server 217.154.62.22 port 40948 [preauth]
Dec 04 10:07:37 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:07:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:07:37 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 04 10:07:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:07:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:07:37 compute-0 systemd[1]: Reloading.
Dec 04 10:07:37 compute-0 systemd-rc-local-generator[34551]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:07:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:07:39 compute-0 sudo[33883]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:07:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:07:39 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.322s CPU time.
Dec 04 10:07:39 compute-0 systemd[1]: run-r1e00bc08aa7848808495c1f46f230129.service: Deactivated successfully.
Dec 04 10:07:39 compute-0 sudo[35458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqpjhbgrxuxvkqbtrezjubhmnzbskadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842859.3619044-165-116961013888370/AnsiballZ_command.py'
Dec 04 10:07:39 compute-0 sudo[35458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:39 compute-0 python3.9[35460]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:07:40 compute-0 sudo[35458]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:41 compute-0 sudo[35739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlwkplmycthnjdmkmnlqlpieltzwheab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842861.172437-173-39129773863620/AnsiballZ_selinux.py'
Dec 04 10:07:41 compute-0 sudo[35739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:42 compute-0 python3.9[35741]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 04 10:07:42 compute-0 sudo[35739]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:42 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.107.29 to 38.102.83.169, pid = 33954
Dec 04 10:07:42 compute-0 sudo[35891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etbdapxafakfszoqccwoxpnutksomwrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842862.4815867-184-211161897762161/AnsiballZ_command.py'
Dec 04 10:07:42 compute-0 sudo[35891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:42 compute-0 python3.9[35893]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 04 10:07:44 compute-0 sudo[35891]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:45 compute-0 sudo[36045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbpjdkmeumypojmzgehogedwmnwxmzqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842864.940201-192-150321370388841/AnsiballZ_file.py'
Dec 04 10:07:45 compute-0 sudo[36045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:45 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.117.69 to 38.102.83.169, pid = 33958
Dec 04 10:07:46 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.118.198 to 38.102.83.169, pid = 33957
Dec 04 10:07:46 compute-0 python3.9[36047]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:07:46 compute-0 sudo[36045]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:47 compute-0 sudo[36197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnohxpfemeqaubiplxcnmgbomzaiaxly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842867.1314027-200-190069384946245/AnsiballZ_mount.py'
Dec 04 10:07:47 compute-0 sudo[36197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:47 compute-0 python3.9[36199]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 04 10:07:47 compute-0 sudo[36197]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:48 compute-0 sudo[36349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnefycaubqawvnntxvbpoljbhurjcbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842868.5734184-228-131690200984051/AnsiballZ_file.py'
Dec 04 10:07:48 compute-0 sudo[36349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:49 compute-0 python3.9[36351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:07:49 compute-0 sudo[36349]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:49 compute-0 sudo[36501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndzstdwxczxsddyvsexcaqhxdsoxovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842869.1995506-236-211587129898094/AnsiballZ_stat.py'
Dec 04 10:07:49 compute-0 sudo[36501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:49 compute-0 python3.9[36503]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:07:49 compute-0 sudo[36501]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:49 compute-0 sudo[36624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holwsmjzzgpcnivuyehhwphzacrojkhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842869.1995506-236-211587129898094/AnsiballZ_copy.py'
Dec 04 10:07:50 compute-0 sudo[36624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:50 compute-0 python3.9[36626]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764842869.1995506-236-211587129898094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:07:50 compute-0 sudo[36624]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:50 compute-0 sudo[36776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqgjlervejgzcopiygijknttiabxsdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842870.5837996-260-103973937829772/AnsiballZ_stat.py'
Dec 04 10:07:50 compute-0 sudo[36776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:51 compute-0 python3.9[36778]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:07:51 compute-0 sudo[36776]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:51 compute-0 sudo[36928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcscolnmbccebivrsjdcbgrppdfftxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842871.1980386-268-60821224515050/AnsiballZ_command.py'
Dec 04 10:07:51 compute-0 sudo[36928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:51 compute-0 python3.9[36930]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:07:51 compute-0 sudo[36928]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:51 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.116.173 to 38.102.83.169, pid = 33964
Dec 04 10:07:52 compute-0 sudo[37081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipdfrhfuufgtycizzvmfeztesfrvolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842871.873441-276-132217743650632/AnsiballZ_file.py'
Dec 04 10:07:52 compute-0 sudo[37081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:52 compute-0 python3.9[37083]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:07:52 compute-0 sudo[37081]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:53 compute-0 sudo[37233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzphexbwgrlibubqcmtupqozufjamyao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842872.6993737-287-148022342357325/AnsiballZ_getent.py'
Dec 04 10:07:53 compute-0 sudo[37233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:53 compute-0 python3.9[37235]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 04 10:07:53 compute-0 sudo[37233]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:53 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:07:53 compute-0 sudo[37387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhulwjhfogykiwovnexyvmezuuzlxyqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842873.5185628-295-179821957067693/AnsiballZ_group.py'
Dec 04 10:07:53 compute-0 sudo[37387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:58 compute-0 python3.9[37389]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:07:58 compute-0 groupadd[37390]: group added to /etc/group: name=qemu, GID=107
Dec 04 10:07:58 compute-0 groupadd[37390]: group added to /etc/gshadow: name=qemu
Dec 04 10:07:58 compute-0 groupadd[37390]: new group: name=qemu, GID=107
Dec 04 10:07:58 compute-0 sudo[37387]: pam_unix(sudo:session): session closed for user root
Dec 04 10:07:59 compute-0 sudo[37545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enykmupzpnabqlrloogecscgguhyywvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842878.949488-303-20197137704910/AnsiballZ_user.py'
Dec 04 10:07:59 compute-0 sudo[37545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:07:59 compute-0 sshd[1008]: Timeout before authentication for connection from 120.48.85.137 to 38.102.83.169, pid = 34016
Dec 04 10:07:59 compute-0 python3.9[37547]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 04 10:07:59 compute-0 useradd[37549]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 04 10:07:59 compute-0 sudo[37545]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:00 compute-0 sudo[37705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbkpwlakfhxgziobnzjkvwzxpmdapzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842879.897423-311-122740442812614/AnsiballZ_getent.py'
Dec 04 10:08:00 compute-0 sudo[37705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:00 compute-0 python3.9[37707]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 04 10:08:00 compute-0 sudo[37705]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:00 compute-0 sudo[37858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbyxtekzfsrrekhndeigdoaywynpvip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842880.53273-319-109419310710692/AnsiballZ_group.py'
Dec 04 10:08:00 compute-0 sudo[37858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:01 compute-0 python3.9[37860]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:08:01 compute-0 groupadd[37861]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 04 10:08:01 compute-0 groupadd[37861]: group added to /etc/gshadow: name=hugetlbfs
Dec 04 10:08:01 compute-0 groupadd[37861]: new group: name=hugetlbfs, GID=42477
Dec 04 10:08:01 compute-0 sudo[37858]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:01 compute-0 sudo[38018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjreioyijyhugcxdelyvvcmxeyvdjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842881.280834-328-119942724826074/AnsiballZ_file.py'
Dec 04 10:08:01 compute-0 sudo[38018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:01 compute-0 sshd-session[37968]: Invalid user frontend from 74.249.218.27 port 55746
Dec 04 10:08:01 compute-0 sshd-session[37968]: Received disconnect from 74.249.218.27 port 55746:11: Bye Bye [preauth]
Dec 04 10:08:01 compute-0 sshd-session[37968]: Disconnected from invalid user frontend 74.249.218.27 port 55746 [preauth]
Dec 04 10:08:01 compute-0 python3.9[38020]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 04 10:08:01 compute-0 sudo[38018]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:02 compute-0 sudo[38170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sotwalzwwxqezuqgtgkwylxweovvwfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842882.1097708-339-147136250009089/AnsiballZ_dnf.py'
Dec 04 10:08:02 compute-0 sudo[38170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:02 compute-0 python3.9[38172]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:08:07 compute-0 sudo[38170]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:08 compute-0 sudo[38323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzelyuhnzuoqhkbesesgptntdoysfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842888.1133142-347-72099960952894/AnsiballZ_file.py'
Dec 04 10:08:08 compute-0 sudo[38323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:08 compute-0 python3.9[38325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:08:08 compute-0 sudo[38323]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:09 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.114.205 to 38.102.83.169, pid = 34038
Dec 04 10:08:09 compute-0 sudo[38475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxezrqnyecgnombgxsgfijpeludzbsur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842888.8522809-355-164129890382118/AnsiballZ_stat.py'
Dec 04 10:08:09 compute-0 sudo[38475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:09 compute-0 python3.9[38477]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:08:09 compute-0 sudo[38475]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:09 compute-0 sudo[38598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrwhatnnghpgdibmhjnfgrmjvanasno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842888.8522809-355-164129890382118/AnsiballZ_copy.py'
Dec 04 10:08:09 compute-0 sudo[38598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:09 compute-0 python3.9[38600]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764842888.8522809-355-164129890382118/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:08:10 compute-0 sudo[38598]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:10 compute-0 sudo[38750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxkvhddsphnhzptimunrwsrrxwamaajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842890.1958427-370-52100216123583/AnsiballZ_systemd.py'
Dec 04 10:08:10 compute-0 sudo[38750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:11 compute-0 python3.9[38752]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:08:11 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 04 10:08:11 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 04 10:08:11 compute-0 kernel: Bridge firewalling registered
Dec 04 10:08:11 compute-0 systemd-modules-load[38756]: Inserted module 'br_netfilter'
Dec 04 10:08:11 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 04 10:08:11 compute-0 sudo[38750]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:11 compute-0 sudo[38910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvvevaxewpjurjdwzvazmpkponhkeno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842891.4844732-378-209882278444776/AnsiballZ_stat.py'
Dec 04 10:08:11 compute-0 sudo[38910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:11 compute-0 python3.9[38912]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:08:11 compute-0 sudo[38910]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:12 compute-0 sudo[39033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsshahhszgkdbywzqslktyedthcgcxva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842891.4844732-378-209882278444776/AnsiballZ_copy.py'
Dec 04 10:08:12 compute-0 sudo[39033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:12 compute-0 python3.9[39035]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764842891.4844732-378-209882278444776/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:08:12 compute-0 sudo[39033]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:13 compute-0 sudo[39185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oattixwjxocwrbxiioyiygjtnsvaqzjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842892.8968656-396-89706926989317/AnsiballZ_dnf.py'
Dec 04 10:08:13 compute-0 sudo[39185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:13 compute-0 python3.9[39187]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:08:15 compute-0 sshd-session[39189]: Invalid user bot from 103.149.86.230 port 57408
Dec 04 10:08:15 compute-0 sshd-session[39189]: Received disconnect from 103.149.86.230 port 57408:11: Bye Bye [preauth]
Dec 04 10:08:15 compute-0 sshd-session[39189]: Disconnected from invalid user bot 103.149.86.230 port 57408 [preauth]
Dec 04 10:08:25 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 04 10:08:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:08:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:08:26 compute-0 systemd[1]: Reloading.
Dec 04 10:08:26 compute-0 systemd-rc-local-generator[39253]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:08:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:08:29 compute-0 sudo[39185]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:30 compute-0 python3.9[41220]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:08:31 compute-0 python3.9[42183]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 04 10:08:31 compute-0 python3.9[42856]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:08:32 compute-0 sudo[43379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggxzidjjinesoctostujxvblghzrcbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842912.260361-435-34979821304407/AnsiballZ_command.py'
Dec 04 10:08:32 compute-0 sudo[43379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:32 compute-0 python3.9[43402]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:08:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 04 10:08:33 compute-0 sshd-session[42917]: Invalid user free from 103.179.218.243 port 40628
Dec 04 10:08:33 compute-0 sshd-session[42917]: Received disconnect from 103.179.218.243 port 40628:11: Bye Bye [preauth]
Dec 04 10:08:33 compute-0 sshd-session[42917]: Disconnected from invalid user free 103.179.218.243 port 40628 [preauth]
Dec 04 10:08:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:08:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:08:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.569s CPU time.
Dec 04 10:08:33 compute-0 systemd[1]: run-r44a3b065544d4269a3621b4d4ff8ccc5.service: Deactivated successfully.
Dec 04 10:08:33 compute-0 systemd[1]: Starting Authorization Manager...
Dec 04 10:08:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 04 10:08:33 compute-0 polkitd[43629]: Started polkitd version 0.117
Dec 04 10:08:33 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Dec 04 10:08:33 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 04 10:08:33 compute-0 polkitd[43629]: Finished loading, compiling and executing 2 rules
Dec 04 10:08:33 compute-0 systemd[1]: Started Authorization Manager.
Dec 04 10:08:33 compute-0 polkitd[43629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 04 10:08:33 compute-0 sudo[43379]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:34 compute-0 sudo[43797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhfdxthceyydemrqwqkaqyprrsmxgxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842913.8682368-444-66795330284718/AnsiballZ_systemd.py'
Dec 04 10:08:34 compute-0 sudo[43797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:34 compute-0 python3.9[43799]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:08:34 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.90.30 to 38.102.83.169, pid = 34221
Dec 04 10:08:35 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 04 10:08:35 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 04 10:08:35 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 04 10:08:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 04 10:08:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 04 10:08:35 compute-0 sudo[43797]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:36 compute-0 python3.9[43961]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 04 10:08:38 compute-0 sudo[44111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoytmucokbkrnidfmczzwksdzgzohfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842918.3977666-501-126909650181356/AnsiballZ_systemd.py'
Dec 04 10:08:38 compute-0 sudo[44111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:39 compute-0 python3.9[44113]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:08:39 compute-0 systemd[1]: Reloading.
Dec 04 10:08:39 compute-0 systemd-rc-local-generator[44144]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:08:39 compute-0 sudo[44111]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:40 compute-0 sudo[44300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syisvodiuyvdpqkssejgwdwxgtnxajxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842919.844637-501-83506184192849/AnsiballZ_systemd.py'
Dec 04 10:08:40 compute-0 sudo[44300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:40 compute-0 python3.9[44302]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:08:40 compute-0 systemd[1]: Reloading.
Dec 04 10:08:40 compute-0 systemd-rc-local-generator[44325]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:08:40 compute-0 sudo[44300]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:41 compute-0 sudo[44489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkmewzwtqbyamsamtkgdpdvihmfmyiln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842920.8987832-517-52806119839032/AnsiballZ_command.py'
Dec 04 10:08:41 compute-0 sudo[44489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:41 compute-0 sshd[1008]: Timeout before authentication for connection from 180.184.134.158 to 38.102.83.169, pid = 34242
Dec 04 10:08:41 compute-0 python3.9[44491]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:08:41 compute-0 sudo[44489]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:41 compute-0 sudo[44642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-colpznhzcodzhrronzgeewhvgsrxbohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842921.6112978-525-37640273515677/AnsiballZ_command.py'
Dec 04 10:08:41 compute-0 sudo[44642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:42 compute-0 python3.9[44644]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:08:42 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 04 10:08:42 compute-0 sudo[44642]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:42 compute-0 sudo[44797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkksmfscoqdkukyymetdkecgcwiynizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842922.3110776-533-23338357625881/AnsiballZ_command.py'
Dec 04 10:08:42 compute-0 sudo[44797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:42 compute-0 python3.9[44799]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:08:43 compute-0 sshd-session[44746]: Invalid user cgpexpert from 217.154.62.22 port 52938
Dec 04 10:08:43 compute-0 sshd-session[44746]: Received disconnect from 217.154.62.22 port 52938:11: Bye Bye [preauth]
Dec 04 10:08:43 compute-0 sshd-session[44746]: Disconnected from invalid user cgpexpert 217.154.62.22 port 52938 [preauth]
Dec 04 10:08:44 compute-0 sudo[44797]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:44 compute-0 sudo[44959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeizlrcimwbnrhsmikbfuecgcmnbirpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842924.4826033-541-11567078462840/AnsiballZ_command.py'
Dec 04 10:08:44 compute-0 sudo[44959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:44 compute-0 python3.9[44961]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:08:44 compute-0 sudo[44959]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:45 compute-0 sudo[45112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frrhvcylgntrzglngdhedefqokeuwocs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842925.130355-549-149134811755394/AnsiballZ_systemd.py'
Dec 04 10:08:45 compute-0 sudo[45112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:45 compute-0 python3.9[45114]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:08:45 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 04 10:08:45 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 04 10:08:45 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 04 10:08:45 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 04 10:08:45 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 04 10:08:45 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 04 10:08:45 compute-0 sudo[45112]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:46 compute-0 sshd-session[31419]: Connection closed by 192.168.122.30 port 53324
Dec 04 10:08:46 compute-0 sshd-session[31416]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:08:46 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Dec 04 10:08:46 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 04 10:08:46 compute-0 systemd[1]: session-9.scope: Consumed 2min 18.039s CPU time.
Dec 04 10:08:46 compute-0 systemd-logind[798]: Removed session 9.
Dec 04 10:08:50 compute-0 sshd-session[45144]: Invalid user vtatis from 107.175.213.239 port 60042
Dec 04 10:08:50 compute-0 sshd-session[45144]: Received disconnect from 107.175.213.239 port 60042:11: Bye Bye [preauth]
Dec 04 10:08:50 compute-0 sshd-session[45144]: Disconnected from invalid user vtatis 107.175.213.239 port 60042 [preauth]
Dec 04 10:08:52 compute-0 sshd-session[45146]: Accepted publickey for zuul from 192.168.122.30 port 39738 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:08:52 compute-0 systemd-logind[798]: New session 10 of user zuul.
Dec 04 10:08:52 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 04 10:08:52 compute-0 sshd-session[45146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:08:53 compute-0 python3.9[45299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:08:54 compute-0 sudo[45453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtvtknkkzzhjzpmplzhwtrqdyalirehn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842933.7814982-36-32847364536653/AnsiballZ_getent.py'
Dec 04 10:08:54 compute-0 sudo[45453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:54 compute-0 python3.9[45455]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 04 10:08:54 compute-0 sudo[45453]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:55 compute-0 sudo[45606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkrdjcdaavjshfhezxxjhxjxjiwvdljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842934.725531-44-157466351823759/AnsiballZ_group.py'
Dec 04 10:08:55 compute-0 sudo[45606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:55 compute-0 python3.9[45608]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:08:55 compute-0 groupadd[45609]: group added to /etc/group: name=openvswitch, GID=42476
Dec 04 10:08:55 compute-0 groupadd[45609]: group added to /etc/gshadow: name=openvswitch
Dec 04 10:08:55 compute-0 groupadd[45609]: new group: name=openvswitch, GID=42476
Dec 04 10:08:55 compute-0 sudo[45606]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:56 compute-0 sudo[45764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzaffknusausrcrzelepqbjaqryxzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842935.6300974-52-186921834162186/AnsiballZ_user.py'
Dec 04 10:08:56 compute-0 sudo[45764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:56 compute-0 python3.9[45766]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 04 10:08:56 compute-0 useradd[45768]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 04 10:08:56 compute-0 useradd[45768]: add 'openvswitch' to group 'hugetlbfs'
Dec 04 10:08:56 compute-0 useradd[45768]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 04 10:08:56 compute-0 sudo[45764]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:56 compute-0 sudo[45924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodmfapdhfjzetblneaytlwgegecfdnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842936.6958864-62-39805095818280/AnsiballZ_setup.py'
Dec 04 10:08:56 compute-0 sudo[45924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:57 compute-0 python3.9[45926]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:08:57 compute-0 sudo[45924]: pam_unix(sudo:session): session closed for user root
Dec 04 10:08:57 compute-0 sudo[46008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pidmppyuleugetvjwgzovqwlembhvbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842936.6958864-62-39805095818280/AnsiballZ_dnf.py'
Dec 04 10:08:57 compute-0 sudo[46008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:08:58 compute-0 python3.9[46010]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 04 10:09:00 compute-0 sshd-session[46012]: Invalid user ftpuser from 45.135.232.92 port 49406
Dec 04 10:09:00 compute-0 sshd-session[46012]: Connection reset by invalid user ftpuser 45.135.232.92 port 49406 [preauth]
Dec 04 10:09:00 compute-0 sudo[46008]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:01 compute-0 sudo[46176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyslujftlqdfqnrbdnztculaxsqfwmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842940.8962903-76-198699532173105/AnsiballZ_dnf.py'
Dec 04 10:09:01 compute-0 sudo[46176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:01 compute-0 python3.9[46178]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:09:02 compute-0 sshd-session[46025]: Invalid user admin from 45.135.232.92 port 49412
Dec 04 10:09:02 compute-0 sshd-session[46025]: Connection reset by invalid user admin 45.135.232.92 port 49412 [preauth]
Dec 04 10:09:05 compute-0 sshd-session[46184]: Invalid user ftpuser from 45.135.232.92 port 49416
Dec 04 10:09:05 compute-0 sshd-session[46184]: Connection reset by invalid user ftpuser 45.135.232.92 port 49416 [preauth]
Dec 04 10:09:07 compute-0 sshd-session[46195]: Connection reset by authenticating user root 45.135.232.92 port 40494 [preauth]
Dec 04 10:09:09 compute-0 sshd-session[46197]: Invalid user admin from 45.135.232.92 port 40508
Dec 04 10:09:09 compute-0 sshd-session[46197]: Connection reset by invalid user admin 45.135.232.92 port 40508 [preauth]
Dec 04 10:09:14 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:09:14 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:09:14 compute-0 groupadd[46207]: group added to /etc/group: name=unbound, GID=993
Dec 04 10:09:14 compute-0 groupadd[46207]: group added to /etc/gshadow: name=unbound
Dec 04 10:09:14 compute-0 groupadd[46207]: new group: name=unbound, GID=993
Dec 04 10:09:14 compute-0 useradd[46214]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 04 10:09:14 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 04 10:09:14 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 04 10:09:15 compute-0 sshd-session[46245]: Invalid user server from 74.249.218.27 port 48038
Dec 04 10:09:15 compute-0 sshd-session[46245]: Received disconnect from 74.249.218.27 port 48038:11: Bye Bye [preauth]
Dec 04 10:09:15 compute-0 sshd-session[46245]: Disconnected from invalid user server 74.249.218.27 port 48038 [preauth]
Dec 04 10:09:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:09:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:09:16 compute-0 systemd[1]: Reloading.
Dec 04 10:09:16 compute-0 systemd-rc-local-generator[46707]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:09:16 compute-0 systemd-sysv-generator[46714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:09:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:09:17 compute-0 sudo[46176]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:09:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:09:17 compute-0 systemd[1]: run-r0724c7fe2f3d4c34a419b6a43cd366d1.service: Deactivated successfully.
Dec 04 10:09:18 compute-0 sudo[47281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpdhzohlqmgqhfrucdhjebwirxtdwrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842957.5049675-84-35763272790025/AnsiballZ_systemd.py'
Dec 04 10:09:18 compute-0 sudo[47281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:18 compute-0 python3.9[47283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:09:18 compute-0 systemd[1]: Reloading.
Dec 04 10:09:18 compute-0 systemd-rc-local-generator[47313]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:09:18 compute-0 systemd-sysv-generator[47317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:09:18 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 04 10:09:18 compute-0 chown[47326]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 04 10:09:18 compute-0 ovs-ctl[47331]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 04 10:09:19 compute-0 ovs-ctl[47331]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 04 10:09:19 compute-0 ovs-ctl[47331]: Starting ovsdb-server [  OK  ]
Dec 04 10:09:19 compute-0 ovs-vsctl[47380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 04 10:09:19 compute-0 ovs-vsctl[47400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"565580d5-3422-4e11-b563-3f1a3db67238\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 04 10:09:19 compute-0 ovs-ctl[47331]: Configuring Open vSwitch system IDs [  OK  ]
Dec 04 10:09:19 compute-0 ovs-ctl[47331]: Enabling remote OVSDB managers [  OK  ]
Dec 04 10:09:19 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 04 10:09:19 compute-0 ovs-vsctl[47406]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 04 10:09:19 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 04 10:09:19 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 04 10:09:19 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 04 10:09:19 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 04 10:09:19 compute-0 ovs-ctl[47450]: Inserting openvswitch module [  OK  ]
Dec 04 10:09:19 compute-0 ovs-ctl[47419]: Starting ovs-vswitchd [  OK  ]
Dec 04 10:09:19 compute-0 ovs-vsctl[47467]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 04 10:09:19 compute-0 ovs-ctl[47419]: Enabling remote OVSDB managers [  OK  ]
Dec 04 10:09:19 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 04 10:09:19 compute-0 systemd[1]: Starting Open vSwitch...
Dec 04 10:09:19 compute-0 systemd[1]: Finished Open vSwitch.
Dec 04 10:09:19 compute-0 sudo[47281]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:20 compute-0 python3.9[47619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:09:21 compute-0 sudo[47769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sreibxrfrfkoljuhwsrdfdlpacqqodwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842960.9247894-102-149425229620443/AnsiballZ_sefcontext.py'
Dec 04 10:09:21 compute-0 sudo[47769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:21 compute-0 python3.9[47771]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 04 10:09:22 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:09:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:09:23 compute-0 sudo[47769]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:24 compute-0 python3.9[47927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:09:24 compute-0 sudo[48084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bejinqyifownhcjqkzjpynirfesqkoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842964.4979942-120-227355444084170/AnsiballZ_dnf.py'
Dec 04 10:09:24 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 04 10:09:24 compute-0 sudo[48084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:25 compute-0 python3.9[48086]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:09:26 compute-0 sudo[48084]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:27 compute-0 sudo[48237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfoupegitsokieyetvikunlhsjtliofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842966.8566263-128-224943203566377/AnsiballZ_command.py'
Dec 04 10:09:27 compute-0 sudo[48237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:27 compute-0 python3.9[48239]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:09:28 compute-0 sudo[48237]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:28 compute-0 sudo[48524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqrijrckxkvptqobuwronivkddfjresp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842968.4501872-136-281022303076137/AnsiballZ_file.py'
Dec 04 10:09:28 compute-0 sudo[48524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:29 compute-0 python3.9[48526]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 04 10:09:29 compute-0 sudo[48524]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:29 compute-0 python3.9[48676]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:09:30 compute-0 sudo[48828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxjbrcjdytnmkwfqmgvvwnsfhmyisrsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842970.1833825-152-115313938630529/AnsiballZ_dnf.py'
Dec 04 10:09:30 compute-0 sudo[48828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:30 compute-0 python3.9[48830]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:09:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:09:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:09:33 compute-0 systemd[1]: Reloading.
Dec 04 10:09:33 compute-0 systemd-sysv-generator[48871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:09:33 compute-0 systemd-rc-local-generator[48868]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:09:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:09:33 compute-0 sudo[48828]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:09:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:09:33 compute-0 systemd[1]: run-r28b6149121b6431190d37f343739788b.service: Deactivated successfully.
Dec 04 10:09:34 compute-0 sshd-session[48836]: Invalid user terraria from 103.149.86.230 port 37364
Dec 04 10:09:34 compute-0 sudo[49148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkynxommcjymogfiiryslrzudzwyylnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842974.0206292-160-276250839359979/AnsiballZ_systemd.py'
Dec 04 10:09:34 compute-0 sudo[49148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:34 compute-0 sshd-session[48836]: Received disconnect from 103.149.86.230 port 37364:11: Bye Bye [preauth]
Dec 04 10:09:34 compute-0 sshd-session[48836]: Disconnected from invalid user terraria 103.149.86.230 port 37364 [preauth]
Dec 04 10:09:34 compute-0 python3.9[49150]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:09:34 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 04 10:09:34 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 04 10:09:34 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7492] caught SIGTERM, shutting down normally.
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): canceled DHCP transaction
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): state changed no lease
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7512] manager: NetworkManager state is now CONNECTED_SITE
Dec 04 10:09:34 compute-0 systemd[1]: Stopping Network Manager...
Dec 04 10:09:34 compute-0 NetworkManager[7184]: <info>  [1764842974.7606] exiting (success)
Dec 04 10:09:34 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 10:09:34 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 04 10:09:34 compute-0 systemd[1]: Stopped Network Manager.
Dec 04 10:09:34 compute-0 systemd[1]: NetworkManager.service: Consumed 14.353s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Dec 04 10:09:34 compute-0 systemd[1]: Starting Network Manager...
Dec 04 10:09:34 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.8348] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.8350] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.8421] manager[0x55fd3257b090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 04 10:09:34 compute-0 systemd[1]: Starting Hostname Service...
Dec 04 10:09:34 compute-0 systemd[1]: Started Hostname Service.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9309] hostname: hostname: using hostnamed
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9311] hostname: static hostname changed from (none) to "compute-0"
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9316] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9321] manager[0x55fd3257b090]: rfkill: Wi-Fi hardware radio set enabled
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9321] manager[0x55fd3257b090]: rfkill: WWAN hardware radio set enabled
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9345] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9356] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9357] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9358] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9358] manager: Networking is enabled by state file
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9361] settings: Loaded settings plugin: keyfile (internal)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9366] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9394] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9401] dhcp: init: Using DHCP client 'internal'
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9403] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9407] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9410] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9416] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9421] device (eth0): carrier: link connected
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9424] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9427] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9427] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9431] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9436] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9440] device (eth1): carrier: link connected
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9443] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9446] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53) (indicated)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9446] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9449] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9454] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec 04 10:09:34 compute-0 systemd[1]: Started Network Manager.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9458] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9463] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9465] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9466] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9468] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9470] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9473] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9475] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9478] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9483] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9486] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9494] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9506] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9515] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9516] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9519] device (lo): Activation: successful, device activated.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9524] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9528] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9599] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9603] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9604] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9606] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9608] device (eth1): Activation: successful, device activated.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9615] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9616] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9618] manager: NetworkManager state is now CONNECTED_SITE
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9620] device (eth0): Activation: successful, device activated.
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9624] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 04 10:09:34 compute-0 NetworkManager[49155]: <info>  [1764842974.9626] manager: startup complete
Dec 04 10:09:34 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 04 10:09:34 compute-0 sudo[49148]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:35 compute-0 sudo[49374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnysouzhdazahjtpmacmvudnjbdhjrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842975.172647-168-200215257651672/AnsiballZ_dnf.py'
Dec 04 10:09:35 compute-0 sudo[49374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:35 compute-0 python3.9[49376]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:09:39 compute-0 sshd-session[48832]: Invalid user ubnt from 180.183.245.232 port 32687
Dec 04 10:09:41 compute-0 sshd-session[48832]: Connection closed by invalid user ubnt 180.183.245.232 port 32687 [preauth]
Dec 04 10:09:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:09:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:09:42 compute-0 systemd[1]: Reloading.
Dec 04 10:09:42 compute-0 systemd-rc-local-generator[49426]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:09:42 compute-0 systemd-sysv-generator[49429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:09:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:09:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:09:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:09:43 compute-0 systemd[1]: run-re1c0ad4d9715485d9f2b6b42f6a21cf0.service: Deactivated successfully.
Dec 04 10:09:43 compute-0 sudo[49374]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:44 compute-0 sudo[49831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qryfbpydjixriqfvmbpvzbdtbdwxzpbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842984.0228992-180-269062013531230/AnsiballZ_stat.py'
Dec 04 10:09:44 compute-0 sudo[49831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:44 compute-0 python3.9[49833]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:09:44 compute-0 sudo[49831]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:45 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 10:09:45 compute-0 sudo[49983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmecsuqlawubawznutfasuatbkaeurh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842984.7009842-189-78099478342319/AnsiballZ_ini_file.py'
Dec 04 10:09:45 compute-0 sudo[49983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:45 compute-0 python3.9[49985]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:45 compute-0 sudo[49983]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:45 compute-0 sudo[50137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunksavebdjiwhegcmhersbuhdzhkovu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842985.5787764-199-110080442715450/AnsiballZ_ini_file.py'
Dec 04 10:09:45 compute-0 sudo[50137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:46 compute-0 python3.9[50139]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:46 compute-0 sudo[50137]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:46 compute-0 sudo[50289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxqlesajtdlyaqygyfdkfhafzlteujcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842986.1427107-199-146585109377662/AnsiballZ_ini_file.py'
Dec 04 10:09:46 compute-0 sudo[50289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:46 compute-0 python3.9[50291]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:46 compute-0 sudo[50289]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:47 compute-0 sudo[50441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leldmtpnpuctqcykdqyldkzxgrzjhafh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842986.7506046-214-189930630637199/AnsiballZ_ini_file.py'
Dec 04 10:09:47 compute-0 sudo[50441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:47 compute-0 python3.9[50443]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:47 compute-0 sudo[50441]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:47 compute-0 sudo[50593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drpvxfdisqqqgpyzjhndkqdtpcbznbgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842987.3892245-214-17843770989506/AnsiballZ_ini_file.py'
Dec 04 10:09:47 compute-0 sudo[50593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:47 compute-0 python3.9[50595]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:47 compute-0 sudo[50593]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:48 compute-0 sudo[50745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfwyaqxiazftitqewoyfeojacljkxfyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842988.050382-229-17260998974331/AnsiballZ_stat.py'
Dec 04 10:09:48 compute-0 sudo[50745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:48 compute-0 python3.9[50747]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:09:48 compute-0 sudo[50745]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:48 compute-0 sudo[50868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdrtvqiwgnwygnlinsiflhdhrcxxfiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842988.050382-229-17260998974331/AnsiballZ_copy.py'
Dec 04 10:09:48 compute-0 sudo[50868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:49 compute-0 python3.9[50870]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764842988.050382-229-17260998974331/.source _original_basename=.499q43_d follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:49 compute-0 sudo[50868]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:49 compute-0 sudo[51020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkosvnkbhbbsuxwrityjywgeeqkqpcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842989.375816-244-267252576011383/AnsiballZ_file.py'
Dec 04 10:09:49 compute-0 sudo[51020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:49 compute-0 python3.9[51022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:49 compute-0 sudo[51020]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:50 compute-0 sudo[51172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlwjkiqkfkokmqkuofrgusjabykqbbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842990.020959-252-145884992175823/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 04 10:09:50 compute-0 sudo[51172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:50 compute-0 python3.9[51176]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 04 10:09:50 compute-0 sudo[51172]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:51 compute-0 sudo[51326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwttyrtrsjodkqbdadetvbpkgkictkmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842990.8315759-261-102033760950503/AnsiballZ_file.py'
Dec 04 10:09:51 compute-0 sudo[51326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:51 compute-0 python3.9[51328]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:09:51 compute-0 sudo[51326]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:51 compute-0 sshd-session[51173]: Invalid user vpnuser from 103.179.218.243 port 40728
Dec 04 10:09:51 compute-0 sudo[51478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpytlrqfyqknvoaxrakjoxhewaqdxvee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842991.583518-271-151150680199813/AnsiballZ_stat.py'
Dec 04 10:09:51 compute-0 sudo[51478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:51 compute-0 sshd-session[51173]: Received disconnect from 103.179.218.243 port 40728:11: Bye Bye [preauth]
Dec 04 10:09:51 compute-0 sshd-session[51173]: Disconnected from invalid user vpnuser 103.179.218.243 port 40728 [preauth]
Dec 04 10:09:52 compute-0 sudo[51478]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:52 compute-0 sudo[51601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxylztimspsjdbujotwojtyjjaqvtig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842991.583518-271-151150680199813/AnsiballZ_copy.py'
Dec 04 10:09:52 compute-0 sudo[51601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:52 compute-0 sudo[51601]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:53 compute-0 sudo[51753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrnvcmmvxrsjjadlixgmfzdmwzeesxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842992.8136418-286-232440038538191/AnsiballZ_slurp.py'
Dec 04 10:09:53 compute-0 sudo[51753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:53 compute-0 python3.9[51755]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 04 10:09:53 compute-0 sudo[51753]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:54 compute-0 sudo[51928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigmepoththjusttpjissmfnqnmcjeno ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842993.616241-295-139400037214218/async_wrapper.py j993236775427 300 /home/zuul/.ansible/tmp/ansible-tmp-1764842993.616241-295-139400037214218/AnsiballZ_edpm_os_net_config.py _'
Dec 04 10:09:54 compute-0 sudo[51928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:54 compute-0 ansible-async_wrapper.py[51930]: Invoked with j993236775427 300 /home/zuul/.ansible/tmp/ansible-tmp-1764842993.616241-295-139400037214218/AnsiballZ_edpm_os_net_config.py _
Dec 04 10:09:54 compute-0 ansible-async_wrapper.py[51933]: Starting module and watcher
Dec 04 10:09:54 compute-0 ansible-async_wrapper.py[51933]: Start watching 51934 (300)
Dec 04 10:09:54 compute-0 ansible-async_wrapper.py[51934]: Start module (51934)
Dec 04 10:09:54 compute-0 ansible-async_wrapper.py[51930]: Return async_wrapper task started.
Dec 04 10:09:54 compute-0 sudo[51928]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:54 compute-0 python3.9[51935]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 04 10:09:55 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 04 10:09:55 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 04 10:09:55 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 04 10:09:55 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 04 10:09:55 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6019] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6036] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6637] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6638] audit: op="connection-add" uuid="a15c4f20-e55d-495f-8cf8-1789ffb767fc" name="br-ex-br" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6653] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6654] audit: op="connection-add" uuid="3501e357-6a24-4589-b09a-4e45df7b9f1e" name="br-ex-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6665] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6665] audit: op="connection-add" uuid="f157efaf-88c1-498e-b7be-9797351e9cc5" name="eth1-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6676] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6677] audit: op="connection-add" uuid="7943381a-aa3c-4448-9073-3da4ad63fbc2" name="vlan20-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6688] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6688] audit: op="connection-add" uuid="2a86e840-97a9-4eb1-a01a-66b8ba93f9a7" name="vlan21-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6699] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6700] audit: op="connection-add" uuid="5a561042-8249-41c1-8751-d90cd23df0d5" name="vlan22-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6709] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6710] audit: op="connection-add" uuid="51befe0a-9ddc-4531-a14f-c5a733b7f996" name="vlan23-port" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6728] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51936 uid=0 result="success"
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6743] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 04 10:09:56 compute-0 NetworkManager[49155]: <info>  [1764842996.6744] audit: op="connection-add" uuid="fdd7d824-308c-4d54-bf4e-3d18073b3936" name="br-ex-if" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0484] audit: op="connection-update" uuid="92b9209e-aa34-525f-93ad-a8f9725aec53" name="ci-private-network" args="connection.slave-type,connection.controller,connection.timestamp,connection.port-type,connection.master,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.addresses,ipv6.method,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type,ipv4.never-default,ipv4.dns,ipv4.routes,ipv4.addresses,ipv4.method,ipv4.routing-rules" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0532] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0535] audit: op="connection-add" uuid="8d789710-985c-461b-8da6-c676763589e9" name="vlan20-if" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0567] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0570] audit: op="connection-add" uuid="c1a6f5fd-fd90-4bc6-a5cb-00fee3cf0eb8" name="vlan21-if" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0601] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0604] audit: op="connection-add" uuid="ec1ba5fd-bc71-4544-a1a1-a6126c1edb02" name="vlan22-if" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0635] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0638] audit: op="connection-add" uuid="c98a02e4-2cf8-4ebc-8cf9-de679df99c1c" name="vlan23-if" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0664] audit: op="connection-delete" uuid="e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93" name="Wired connection 1" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0687] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0704] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0712] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a15c4f20-e55d-495f-8cf8-1789ffb767fc)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0713] audit: op="connection-activate" uuid="a15c4f20-e55d-495f-8cf8-1789ffb767fc" name="br-ex-br" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0717] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0730] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0736] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3501e357-6a24-4589-b09a-4e45df7b9f1e)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0740] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0751] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0758] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f157efaf-88c1-498e-b7be-9797351e9cc5)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0762] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0775] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0782] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7943381a-aa3c-4448-9073-3da4ad63fbc2)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0785] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0798] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0805] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2a86e840-97a9-4eb1-a01a-66b8ba93f9a7)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0809] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0819] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0826] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5a561042-8249-41c1-8751-d90cd23df0d5)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0830] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0840] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0847] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (51befe0a-9ddc-4531-a14f-c5a733b7f996)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0849] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0855] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0858] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0869] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0877] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0884] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (fdd7d824-308c-4d54-bf4e-3d18073b3936)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0886] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0892] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0895] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0898] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0900] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0918] device (eth1): disconnecting for new activation request.
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0919] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0925] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0928] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0932] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0937] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0947] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0954] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8d789710-985c-461b-8da6-c676763589e9)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0955] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0960] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0964] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0967] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0973] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0980] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0987] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c1a6f5fd-fd90-4bc6-a5cb-00fee3cf0eb8)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0988] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0993] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0997] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.0999] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1005] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1015] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1024] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ec1ba5fd-bc71-4544-a1a1-a6126c1edb02)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1025] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1031] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1035] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1037] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1044] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1052] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1056] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c98a02e4-2cf8-4ebc-8cf9-de679df99c1c)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1057] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1059] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1061] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1062] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1063] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1074] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51936 uid=0 result="success"
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1075] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1078] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1079] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1086] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1089] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1103] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1109] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1113] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1123] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 kernel: Timeout policy base is empty
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1132] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1139] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 systemd-udevd[51940]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1143] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1153] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1160] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1167] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1170] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1179] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1189] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1197] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1203] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1227] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1235] dhcp4 (eth0): canceled DHCP transaction
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1237] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1239] dhcp4 (eth0): state changed no lease
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1243] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1262] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.1269] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51936 uid=0 result="fail" reason="Device is not activated"
Dec 04 10:09:57 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 04 10:09:57 compute-0 kernel: br-ex: entered promiscuous mode
Dec 04 10:09:57 compute-0 kernel: vlan20: entered promiscuous mode
Dec 04 10:09:57 compute-0 systemd-udevd[51941]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7174] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7186] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7210] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7220] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7232] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 04 10:09:57 compute-0 NetworkManager[49155]: <info>  [1764842997.7241] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 04 10:09:57 compute-0 kernel: vlan21: entered promiscuous mode
Dec 04 10:09:57 compute-0 kernel: vlan22: entered promiscuous mode
Dec 04 10:09:57 compute-0 systemd-udevd[51942]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 10:09:57 compute-0 kernel: vlan23: entered promiscuous mode
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0830] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0918] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0922] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0923] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0924] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0925] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0926] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0927] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0928] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0931] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0937] device (eth1): disconnecting for new activation request.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0938] audit: op="connection-activate" uuid="92b9209e-aa34-525f-93ad-a8f9725aec53" name="ci-private-network" pid=51936 uid=0 result="success"
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0941] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0962] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0968] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0974] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0976] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0980] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0990] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0994] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.0997] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1000] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1004] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1007] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1010] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1015] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1018] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1021] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1024] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1028] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1031] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1037] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1078] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1082] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1089] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1094] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1116] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1125] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1133] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1154] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1157] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1161] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1169] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1179] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1187] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1197] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1202] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1203] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1205] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1207] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1209] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1216] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1222] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1228] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1235] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1241] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1253] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1258] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1266] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1267] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 04 10:09:58 compute-0 NetworkManager[49155]: <info>  [1764842998.1272] device (eth1): Activation: successful, device activated.
Dec 04 10:09:58 compute-0 sudo[52276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjojxyzpnbcnuhlexyujewifhtrwlwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842997.674539-295-22090865091669/AnsiballZ_async_status.py'
Dec 04 10:09:58 compute-0 sudo[52276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:09:58 compute-0 python3.9[52285]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=status _async_dir=/root/.ansible_async
Dec 04 10:09:58 compute-0 sudo[52276]: pam_unix(sudo:session): session closed for user root
Dec 04 10:09:59 compute-0 NetworkManager[49155]: <info>  [1764842999.4438] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec 04 10:09:59 compute-0 ansible-async_wrapper.py[51933]: 51934 still running (300)
Dec 04 10:09:59 compute-0 NetworkManager[49155]: <info>  [1764842999.6143] checkpoint[0x55fd32550950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 04 10:09:59 compute-0 NetworkManager[49155]: <info>  [1764842999.6146] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec 04 10:09:59 compute-0 NetworkManager[49155]: <info>  [1764842999.9389] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec 04 10:09:59 compute-0 NetworkManager[49155]: <info>  [1764842999.9400] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec 04 10:10:00 compute-0 NetworkManager[49155]: <info>  [1764843000.8123] audit: op="networking-control" arg="global-dns-configuration" pid=51936 uid=0 result="success"
Dec 04 10:10:01 compute-0 NetworkManager[49155]: <info>  [1764843001.2269] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 04 10:10:01 compute-0 NetworkManager[49155]: <info>  [1764843001.2557] audit: op="networking-control" arg="global-dns-configuration" pid=51936 uid=0 result="success"
Dec 04 10:10:01 compute-0 NetworkManager[49155]: <info>  [1764843001.2584] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec 04 10:10:01 compute-0 NetworkManager[49155]: <info>  [1764843001.4289] checkpoint[0x55fd32550a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 04 10:10:01 compute-0 NetworkManager[49155]: <info>  [1764843001.4293] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec 04 10:10:01 compute-0 ansible-async_wrapper.py[51934]: Module complete (51934)
Dec 04 10:10:01 compute-0 sudo[52403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbarayglhgttkcppteuynkxykvurugpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842997.674539-295-22090865091669/AnsiballZ_async_status.py'
Dec 04 10:10:01 compute-0 sudo[52403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:02 compute-0 python3.9[52405]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=status _async_dir=/root/.ansible_async
Dec 04 10:10:02 compute-0 sudo[52403]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:02 compute-0 sudo[52502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvndphwwibpuqgthpdvrosjxbjzqhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764842997.674539-295-22090865091669/AnsiballZ_async_status.py'
Dec 04 10:10:02 compute-0 sudo[52502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:02 compute-0 python3.9[52504]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=cleanup _async_dir=/root/.ansible_async
Dec 04 10:10:02 compute-0 sudo[52502]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:02 compute-0 sshd-session[52505]: Invalid user customer from 217.154.62.22 port 46314
Dec 04 10:10:03 compute-0 sshd-session[52505]: Received disconnect from 217.154.62.22 port 46314:11: Bye Bye [preauth]
Dec 04 10:10:03 compute-0 sshd-session[52505]: Disconnected from invalid user customer 217.154.62.22 port 46314 [preauth]
Dec 04 10:10:03 compute-0 sudo[52656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vycnasewqvjrqchtsuperedjbsksxiri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843002.7698042-322-165556734489432/AnsiballZ_stat.py'
Dec 04 10:10:03 compute-0 sudo[52656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:03 compute-0 python3.9[52658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:03 compute-0 sudo[52656]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:03 compute-0 sudo[52779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvabwovsdcshkbxxwgtlwvmipfnmkypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843002.7698042-322-165556734489432/AnsiballZ_copy.py'
Dec 04 10:10:03 compute-0 sudo[52779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:03 compute-0 python3.9[52781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843002.7698042-322-165556734489432/.source.returncode _original_basename=.dlbnlgt7 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:03 compute-0 sudo[52779]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:04 compute-0 sudo[52931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soglxhojrntbcbjwiqrnywowpqdqrgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843004.035721-338-193265886416780/AnsiballZ_stat.py'
Dec 04 10:10:04 compute-0 sudo[52931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:04 compute-0 ansible-async_wrapper.py[51933]: Done in kid B.
Dec 04 10:10:04 compute-0 python3.9[52933]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:04 compute-0 sudo[52931]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:04 compute-0 sudo[53054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhwijswlyzxvroykzhtzzscmtwxqhbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843004.035721-338-193265886416780/AnsiballZ_copy.py'
Dec 04 10:10:04 compute-0 sudo[53054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:04 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 04 10:10:05 compute-0 python3.9[53056]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843004.035721-338-193265886416780/.source.cfg _original_basename=.9yo7dvgm follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:05 compute-0 sudo[53054]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:05 compute-0 sudo[53209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itmssvsznvnzmvbnnjzullgqkycukebi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843005.2019596-353-186534636576644/AnsiballZ_systemd.py'
Dec 04 10:10:05 compute-0 sudo[53209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:05 compute-0 python3.9[53211]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:10:05 compute-0 systemd[1]: Reloading Network Manager...
Dec 04 10:10:05 compute-0 NetworkManager[49155]: <info>  [1764843005.9211] audit: op="reload" arg="0" pid=53215 uid=0 result="success"
Dec 04 10:10:05 compute-0 NetworkManager[49155]: <info>  [1764843005.9221] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 04 10:10:05 compute-0 systemd[1]: Reloaded Network Manager.
Dec 04 10:10:05 compute-0 sudo[53209]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:06 compute-0 sshd-session[45149]: Connection closed by 192.168.122.30 port 39738
Dec 04 10:10:06 compute-0 sshd-session[45146]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:10:06 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 04 10:10:06 compute-0 systemd[1]: session-10.scope: Consumed 53.651s CPU time.
Dec 04 10:10:06 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Dec 04 10:10:06 compute-0 systemd-logind[798]: Removed session 10.
Dec 04 10:10:11 compute-0 sshd-session[53246]: Accepted publickey for zuul from 192.168.122.30 port 52684 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:10:11 compute-0 systemd-logind[798]: New session 11 of user zuul.
Dec 04 10:10:11 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 04 10:10:11 compute-0 sshd-session[53246]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:10:12 compute-0 python3.9[53399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:10:13 compute-0 python3.9[53553]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:10:14 compute-0 python3.9[53747]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:10:15 compute-0 sshd-session[53249]: Connection closed by 192.168.122.30 port 52684
Dec 04 10:10:15 compute-0 sshd-session[53246]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:10:15 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 04 10:10:15 compute-0 systemd[1]: session-11.scope: Consumed 2.433s CPU time.
Dec 04 10:10:15 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Dec 04 10:10:15 compute-0 systemd-logind[798]: Removed session 11.
Dec 04 10:10:15 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 04 10:10:20 compute-0 sshd-session[53776]: Accepted publickey for zuul from 192.168.122.30 port 55172 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:10:20 compute-0 systemd-logind[798]: New session 12 of user zuul.
Dec 04 10:10:20 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 04 10:10:20 compute-0 sshd-session[53776]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:10:21 compute-0 python3.9[53930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:10:22 compute-0 python3.9[54084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:10:23 compute-0 sudo[54238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fozcnslmcsqakiavdpxmodmqcvuqlpkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843022.9135678-40-27976849768883/AnsiballZ_setup.py'
Dec 04 10:10:23 compute-0 sudo[54238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:23 compute-0 python3.9[54240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:10:23 compute-0 sudo[54238]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:24 compute-0 sudo[54323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqyvsikrzuonikcflvwbpccgczjmsiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843022.9135678-40-27976849768883/AnsiballZ_dnf.py'
Dec 04 10:10:24 compute-0 sudo[54323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:24 compute-0 python3.9[54325]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:10:25 compute-0 sudo[54323]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:26 compute-0 sudo[54476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geupvjtnxialtfaulkmeojrvbmqpwnjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843025.9463935-52-175031347855180/AnsiballZ_setup.py'
Dec 04 10:10:26 compute-0 sudo[54476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:26 compute-0 python3.9[54478]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:10:26 compute-0 sudo[54476]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:27 compute-0 sudo[54672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmduhbgywiuqimsgodjibpoveskkjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843027.065398-63-158522798871321/AnsiballZ_file.py'
Dec 04 10:10:27 compute-0 sudo[54672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:27 compute-0 python3.9[54674]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:27 compute-0 sudo[54672]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:28 compute-0 sudo[54824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkcajgkpgcoghgjojbqphyqbqnpjfqyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843027.8484273-71-81114741271998/AnsiballZ_command.py'
Dec 04 10:10:28 compute-0 sudo[54824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:28 compute-0 python3.9[54826]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3173492795-merged.mount: Deactivated successfully.
Dec 04 10:10:28 compute-0 podman[54827]: 2025-12-04 10:10:28.585666882 +0000 UTC m=+0.054907193 system refresh
Dec 04 10:10:28 compute-0 sudo[54824]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:29 compute-0 sshd-session[54866]: Invalid user posiflex from 74.249.218.27 port 34768
Dec 04 10:10:29 compute-0 sshd-session[54866]: Received disconnect from 74.249.218.27 port 34768:11: Bye Bye [preauth]
Dec 04 10:10:29 compute-0 sshd-session[54866]: Disconnected from invalid user posiflex 74.249.218.27 port 34768 [preauth]
Dec 04 10:10:29 compute-0 sudo[54989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondjygxxpignguhofudmxjggiiwmaedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843028.7938814-79-159028739508726/AnsiballZ_stat.py'
Dec 04 10:10:29 compute-0 sudo[54989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:29 compute-0 python3.9[54991]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:29 compute-0 sudo[54989]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:10:29 compute-0 sudo[55112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxqucwghbrxqcqiqgutecsrexmlfraar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843028.7938814-79-159028739508726/AnsiballZ_copy.py'
Dec 04 10:10:29 compute-0 sudo[55112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:30 compute-0 python3.9[55114]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843028.7938814-79-159028739508726/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c842a32f0e5aeddf216d0e4b41b36c6a0454f7d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:30 compute-0 sudo[55112]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:30 compute-0 sudo[55264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhnnivvhcsenrzfiigrdpmhtxdajxzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843030.3612537-94-192471316061870/AnsiballZ_stat.py'
Dec 04 10:10:30 compute-0 sudo[55264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:30 compute-0 python3.9[55266]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:30 compute-0 sudo[55264]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:31 compute-0 sudo[55387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpdwnlmilbdddmcgnaosdwoicroxqza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843030.3612537-94-192471316061870/AnsiballZ_copy.py'
Dec 04 10:10:31 compute-0 sudo[55387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:31 compute-0 python3.9[55389]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843030.3612537-94-192471316061870/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e054e42fc917865162376c34713b3d5516074d23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:10:31 compute-0 sudo[55387]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:32 compute-0 sudo[55539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmsvqnawyskqvrzzgkklocckswyrljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843031.750222-110-89177325127465/AnsiballZ_ini_file.py'
Dec 04 10:10:32 compute-0 sudo[55539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:32 compute-0 python3.9[55541]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:10:32 compute-0 sudo[55539]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:32 compute-0 sudo[55691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqschsscbwejhfqvqecudzdkzxakont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843032.5340621-110-249608680229269/AnsiballZ_ini_file.py'
Dec 04 10:10:32 compute-0 sudo[55691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:33 compute-0 python3.9[55693]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:10:33 compute-0 sudo[55691]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:33 compute-0 sudo[55843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gayzcbfybegshdmswwyochhzrlufutja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843033.1816354-110-25595200646013/AnsiballZ_ini_file.py'
Dec 04 10:10:33 compute-0 sudo[55843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:33 compute-0 python3.9[55845]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:10:33 compute-0 sudo[55843]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:34 compute-0 sudo[55995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jifxhgskhecgsfwkmqbhgrkqrpdyadny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843033.808652-110-173016378374879/AnsiballZ_ini_file.py'
Dec 04 10:10:34 compute-0 sudo[55995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:34 compute-0 python3.9[55997]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:10:34 compute-0 sudo[55995]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:34 compute-0 sudo[56147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whwwgqrfyrkqkwyehgtuogmbhcnuiytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843034.5893438-141-187031068757126/AnsiballZ_dnf.py'
Dec 04 10:10:34 compute-0 sudo[56147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:35 compute-0 python3.9[56149]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:10:36 compute-0 sudo[56147]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:37 compute-0 sudo[56300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdbddqmczxsfyqkmpogwpemrzfkrsmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843037.0144708-152-71260948058961/AnsiballZ_setup.py'
Dec 04 10:10:37 compute-0 sudo[56300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:37 compute-0 python3.9[56302]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:10:37 compute-0 sudo[56300]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:38 compute-0 sudo[56454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjnhrsjrrjrbdqpkyuqgnsdyabvlvxgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843037.8771603-160-270432757709462/AnsiballZ_stat.py'
Dec 04 10:10:38 compute-0 sudo[56454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:38 compute-0 python3.9[56456]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:10:38 compute-0 sudo[56454]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:38 compute-0 sudo[56606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozichdkplhnxfmegtiroslayonesvag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843038.5944822-169-246262221202224/AnsiballZ_stat.py'
Dec 04 10:10:38 compute-0 sudo[56606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:39 compute-0 python3.9[56608]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:10:39 compute-0 sudo[56606]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:39 compute-0 sudo[56758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exlidmuhdlbiylfqyilgemjwbhkhkpax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843039.3465526-179-174609920942838/AnsiballZ_command.py'
Dec 04 10:10:39 compute-0 sudo[56758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:39 compute-0 python3.9[56760]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:10:39 compute-0 sudo[56758]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:40 compute-0 sudo[56911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqnozzizutpzegtzkougsmasnznuiiml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843040.1322076-189-46591777065270/AnsiballZ_service_facts.py'
Dec 04 10:10:40 compute-0 sudo[56911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:40 compute-0 python3.9[56913]: ansible-service_facts Invoked
Dec 04 10:10:40 compute-0 network[56930]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:10:40 compute-0 network[56931]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:10:40 compute-0 network[56932]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:10:43 compute-0 sshd-session[57006]: Invalid user azureuser from 107.175.213.239 port 51764
Dec 04 10:10:43 compute-0 sshd-session[57006]: Received disconnect from 107.175.213.239 port 51764:11: Bye Bye [preauth]
Dec 04 10:10:43 compute-0 sshd-session[57006]: Disconnected from invalid user azureuser 107.175.213.239 port 51764 [preauth]
Dec 04 10:10:44 compute-0 sudo[56911]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:44 compute-0 sudo[57217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyrmqkfyrbeigjdgknqsnycecjpmjedb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764843044.576766-204-24722512296376/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764843044.576766-204-24722512296376/args'
Dec 04 10:10:44 compute-0 sudo[57217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:44 compute-0 sudo[57217]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:45 compute-0 sudo[57384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmuatqmgyukktpovobzscipcbesgwmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843045.2247653-215-30032259448612/AnsiballZ_dnf.py'
Dec 04 10:10:45 compute-0 sudo[57384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:45 compute-0 python3.9[57386]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:10:46 compute-0 sudo[57384]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:47 compute-0 sshd-session[57388]: Invalid user oracle from 103.149.86.230 port 37012
Dec 04 10:10:47 compute-0 sudo[57539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwqhtdjshkbkfggnuvuncbdhxfddpqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843047.315028-228-72947894395062/AnsiballZ_package_facts.py'
Dec 04 10:10:47 compute-0 sudo[57539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:47 compute-0 sshd-session[57388]: Received disconnect from 103.149.86.230 port 37012:11: Bye Bye [preauth]
Dec 04 10:10:47 compute-0 sshd-session[57388]: Disconnected from invalid user oracle 103.149.86.230 port 37012 [preauth]
Dec 04 10:10:48 compute-0 python3.9[57541]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 04 10:10:48 compute-0 sudo[57539]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:49 compute-0 sudo[57691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqwhygchnlwuqpoksjsvyzcjtzqdlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843048.8365653-238-103495330234340/AnsiballZ_stat.py'
Dec 04 10:10:49 compute-0 sudo[57691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:49 compute-0 python3.9[57693]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:49 compute-0 sudo[57691]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:49 compute-0 sudo[57816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onkuipalrczfvkficzwudamaqjouyjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843048.8365653-238-103495330234340/AnsiballZ_copy.py'
Dec 04 10:10:49 compute-0 sudo[57816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:49 compute-0 python3.9[57818]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843048.8365653-238-103495330234340/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:50 compute-0 sudo[57816]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:50 compute-0 sudo[57970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsryvawfqhudzdewtxyrkiybsybyunhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843050.2730312-253-18079187276601/AnsiballZ_stat.py'
Dec 04 10:10:50 compute-0 sudo[57970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:50 compute-0 python3.9[57972]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:10:50 compute-0 sudo[57970]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:51 compute-0 sudo[58095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzcawvrpaosepjtofdlxqdxtihzsyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843050.2730312-253-18079187276601/AnsiballZ_copy.py'
Dec 04 10:10:51 compute-0 sudo[58095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:51 compute-0 python3.9[58097]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843050.2730312-253-18079187276601/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:51 compute-0 sudo[58095]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:52 compute-0 sudo[58249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlnfuhwjsjeesbyerglxjvqcpiprrpur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843051.8390954-274-11435373077493/AnsiballZ_lineinfile.py'
Dec 04 10:10:52 compute-0 sudo[58249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:52 compute-0 python3.9[58251]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:10:52 compute-0 sudo[58249]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:53 compute-0 sudo[58403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzxfebibtamymsxwoawuvtylslgqbcoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843053.1066377-289-164038899074224/AnsiballZ_setup.py'
Dec 04 10:10:53 compute-0 sudo[58403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:53 compute-0 python3.9[58405]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:10:53 compute-0 sudo[58403]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:54 compute-0 sudo[58487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usklqvbiogzenvaijubtqoxiglsrnnap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843053.1066377-289-164038899074224/AnsiballZ_systemd.py'
Dec 04 10:10:54 compute-0 sudo[58487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:54 compute-0 python3.9[58489]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:10:54 compute-0 sudo[58487]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:55 compute-0 sudo[58641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoffdbspeppxgrvypazvqrjrgrzmddhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843055.4250588-305-58227101610275/AnsiballZ_setup.py'
Dec 04 10:10:55 compute-0 sudo[58641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:56 compute-0 python3.9[58643]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:10:56 compute-0 sudo[58641]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:56 compute-0 sudo[58725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodkiqowqbzcfdzggslamrwvuhmisidl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843055.4250588-305-58227101610275/AnsiballZ_systemd.py'
Dec 04 10:10:56 compute-0 sudo[58725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:10:56 compute-0 python3.9[58727]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:10:56 compute-0 chronyd[791]: chronyd exiting
Dec 04 10:10:56 compute-0 systemd[1]: Stopping NTP client/server...
Dec 04 10:10:56 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 04 10:10:56 compute-0 systemd[1]: Stopped NTP client/server.
Dec 04 10:10:56 compute-0 systemd[1]: Starting NTP client/server...
Dec 04 10:10:56 compute-0 chronyd[58735]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 04 10:10:56 compute-0 chronyd[58735]: Frequency -23.612 +/- 0.226 ppm read from /var/lib/chrony/drift
Dec 04 10:10:56 compute-0 chronyd[58735]: Loaded seccomp filter (level 2)
Dec 04 10:10:56 compute-0 systemd[1]: Started NTP client/server.
Dec 04 10:10:56 compute-0 sudo[58725]: pam_unix(sudo:session): session closed for user root
Dec 04 10:10:57 compute-0 sshd-session[53779]: Connection closed by 192.168.122.30 port 55172
Dec 04 10:10:57 compute-0 sshd-session[53776]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:10:57 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 04 10:10:57 compute-0 systemd[1]: session-12.scope: Consumed 26.891s CPU time.
Dec 04 10:10:57 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Dec 04 10:10:57 compute-0 systemd-logind[798]: Removed session 12.
Dec 04 10:11:03 compute-0 sshd-session[58761]: Accepted publickey for zuul from 192.168.122.30 port 41812 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:11:03 compute-0 systemd-logind[798]: New session 13 of user zuul.
Dec 04 10:11:03 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 04 10:11:03 compute-0 sshd-session[58761]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:11:03 compute-0 sudo[58914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afffspvhcwhcinubfdsvbmmknnbrqcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843063.1448216-22-88118478014663/AnsiballZ_file.py'
Dec 04 10:11:03 compute-0 sudo[58914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:03 compute-0 python3.9[58916]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:03 compute-0 sudo[58914]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:04 compute-0 sudo[59066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzfsirstrqzftcnipaxlhmxstlkgzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843064.0934138-34-200053414231995/AnsiballZ_stat.py'
Dec 04 10:11:04 compute-0 sudo[59066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:04 compute-0 python3.9[59068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:04 compute-0 sudo[59066]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:05 compute-0 sudo[59189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnlziigpgmreisjxrvurnopliffbvqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843064.0934138-34-200053414231995/AnsiballZ_copy.py'
Dec 04 10:11:05 compute-0 sudo[59189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:05 compute-0 python3.9[59191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843064.0934138-34-200053414231995/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:05 compute-0 sudo[59189]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:05 compute-0 sshd-session[58764]: Connection closed by 192.168.122.30 port 41812
Dec 04 10:11:05 compute-0 sshd-session[58761]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:11:05 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 04 10:11:05 compute-0 systemd[1]: session-13.scope: Consumed 1.858s CPU time.
Dec 04 10:11:05 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Dec 04 10:11:05 compute-0 systemd-logind[798]: Removed session 13.
Dec 04 10:11:08 compute-0 sshd-session[59216]: Received disconnect from 103.179.218.243 port 40834:11: Bye Bye [preauth]
Dec 04 10:11:08 compute-0 sshd-session[59216]: Disconnected from authenticating user root 103.179.218.243 port 40834 [preauth]
Dec 04 10:11:11 compute-0 sshd-session[59218]: Accepted publickey for zuul from 192.168.122.30 port 41936 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:11:11 compute-0 systemd-logind[798]: New session 14 of user zuul.
Dec 04 10:11:11 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 04 10:11:11 compute-0 sshd-session[59218]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:11:12 compute-0 python3.9[59371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:11:13 compute-0 sudo[59525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zejvehtwrettgksykzpbryqltrcvzmgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843072.6556268-33-22408226809626/AnsiballZ_file.py'
Dec 04 10:11:13 compute-0 sudo[59525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:13 compute-0 python3.9[59527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:13 compute-0 sudo[59525]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:14 compute-0 sudo[59700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfvteqbbhswkskgpcfzouvgpwlmrjtgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843073.536437-41-137303953496244/AnsiballZ_stat.py'
Dec 04 10:11:14 compute-0 sudo[59700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:14 compute-0 python3.9[59702]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:14 compute-0 sudo[59700]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:14 compute-0 sudo[59823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqxqgyurpazpjqkhtayffaarbexbwozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843073.536437-41-137303953496244/AnsiballZ_copy.py'
Dec 04 10:11:14 compute-0 sudo[59823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:15 compute-0 python3.9[59825]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764843073.536437-41-137303953496244/.source.json _original_basename=.mw3z_fpc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:15 compute-0 sudo[59823]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:15 compute-0 sudo[59975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxamdcfpvpvibwscexizdtiljadtwheb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843075.4072232-64-230044537283121/AnsiballZ_stat.py'
Dec 04 10:11:15 compute-0 sudo[59975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:15 compute-0 python3.9[59977]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:15 compute-0 sudo[59975]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:16 compute-0 sudo[60098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epowbqjwoleezyvfkstgjtaoamuzatgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843075.4072232-64-230044537283121/AnsiballZ_copy.py'
Dec 04 10:11:16 compute-0 sudo[60098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:16 compute-0 python3.9[60100]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843075.4072232-64-230044537283121/.source _original_basename=.xjul2cfx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:16 compute-0 sudo[60098]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:16 compute-0 sudo[60250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kysqtzrmbesubbpqvwuskihwzhdwsdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843076.6745362-80-43316224958014/AnsiballZ_file.py'
Dec 04 10:11:16 compute-0 sudo[60250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:17 compute-0 python3.9[60252]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:11:17 compute-0 sudo[60250]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:17 compute-0 sudo[60402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcijouljvdaimbsmqsqxlwmcviximyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843077.2852218-88-257198964038621/AnsiballZ_stat.py'
Dec 04 10:11:17 compute-0 sudo[60402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:17 compute-0 python3.9[60404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:17 compute-0 sudo[60402]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:18 compute-0 sudo[60525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvaaxaayfctqyeijpjvsbfndnkdysooq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843077.2852218-88-257198964038621/AnsiballZ_copy.py'
Dec 04 10:11:18 compute-0 sudo[60525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:18 compute-0 python3.9[60527]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843077.2852218-88-257198964038621/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:11:18 compute-0 sudo[60525]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:18 compute-0 sudo[60677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxoxrqkxushscyzfjyqvigbzntgzjjxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843078.5008197-88-43318723721941/AnsiballZ_stat.py'
Dec 04 10:11:18 compute-0 sudo[60677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:18 compute-0 python3.9[60679]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:19 compute-0 sudo[60677]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:19 compute-0 sudo[60800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euidwylxjcalasrlzwcigtdnhwfimbps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843078.5008197-88-43318723721941/AnsiballZ_copy.py'
Dec 04 10:11:19 compute-0 sudo[60800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:19 compute-0 python3.9[60802]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843078.5008197-88-43318723721941/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:11:19 compute-0 sudo[60800]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:20 compute-0 sudo[60952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfieixpdyfygspjshiehrtqbhqaocih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843079.8226485-117-241738510230398/AnsiballZ_file.py'
Dec 04 10:11:20 compute-0 sudo[60952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:20 compute-0 python3.9[60954]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:20 compute-0 sudo[60952]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:20 compute-0 sudo[61104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyvbswuxlvrvytijzksdhntroxthpyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843080.480024-125-43068505848326/AnsiballZ_stat.py'
Dec 04 10:11:20 compute-0 sudo[61104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:20 compute-0 python3.9[61106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:20 compute-0 sudo[61104]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:21 compute-0 sudo[61227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzurlhqjdkxzdicpplghrgaoewsmtfry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843080.480024-125-43068505848326/AnsiballZ_copy.py'
Dec 04 10:11:21 compute-0 sudo[61227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:21 compute-0 python3.9[61229]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843080.480024-125-43068505848326/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:21 compute-0 sudo[61227]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:22 compute-0 sudo[61379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvtyhkufdliogurhtzmvuuqhluicpuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843081.7465773-140-53644912381580/AnsiballZ_stat.py'
Dec 04 10:11:22 compute-0 sudo[61379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:22 compute-0 python3.9[61381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:22 compute-0 sudo[61379]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:22 compute-0 sudo[61502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotfriizeydwizoxzbrgfbolecjdpfog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843081.7465773-140-53644912381580/AnsiballZ_copy.py'
Dec 04 10:11:22 compute-0 sudo[61502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:22 compute-0 python3.9[61504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843081.7465773-140-53644912381580/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:22 compute-0 sudo[61502]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:23 compute-0 sudo[61654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexfbfaykjebkaslpwqqivqzxdkuenmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843083.0504181-155-77221216938285/AnsiballZ_systemd.py'
Dec 04 10:11:23 compute-0 sudo[61654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:23 compute-0 python3.9[61656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:11:23 compute-0 systemd[1]: Reloading.
Dec 04 10:11:24 compute-0 systemd-rc-local-generator[61682]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:24 compute-0 systemd-sysv-generator[61687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:24 compute-0 systemd[1]: Reloading.
Dec 04 10:11:24 compute-0 sshd[1008]: Timeout before authentication for connection from 219.144.16.16 to 38.102.83.169, pid = 47772
Dec 04 10:11:24 compute-0 systemd-rc-local-generator[61717]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:24 compute-0 systemd-sysv-generator[61720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:24 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 04 10:11:24 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 04 10:11:24 compute-0 sudo[61654]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:25 compute-0 sudo[61881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvpawvyhvuampmkrpmugrytcpfyjwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843084.7788312-163-191154483260111/AnsiballZ_stat.py'
Dec 04 10:11:25 compute-0 sudo[61881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:25 compute-0 python3.9[61883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:25 compute-0 sudo[61881]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:25 compute-0 sudo[62004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwaupgdszypobxoakhvtgsnmcywgpvpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843084.7788312-163-191154483260111/AnsiballZ_copy.py'
Dec 04 10:11:25 compute-0 sudo[62004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:25 compute-0 python3.9[62006]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843084.7788312-163-191154483260111/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:25 compute-0 sudo[62004]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:26 compute-0 sudo[62156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgrmfvycfaxxlmwoexanchzcjahkjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843085.9530258-178-185672400826386/AnsiballZ_stat.py'
Dec 04 10:11:26 compute-0 sudo[62156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:26 compute-0 python3.9[62158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:26 compute-0 sudo[62156]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:26 compute-0 sudo[62281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakaoqwdfvmthzouynpxuandljoydfpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843085.9530258-178-185672400826386/AnsiballZ_copy.py'
Dec 04 10:11:26 compute-0 sudo[62281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:26 compute-0 python3.9[62283]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843085.9530258-178-185672400826386/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:26 compute-0 sudo[62281]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:27 compute-0 sshd-session[62253]: Invalid user terraria from 217.154.62.22 port 45678
Dec 04 10:11:27 compute-0 sshd-session[62253]: Received disconnect from 217.154.62.22 port 45678:11: Bye Bye [preauth]
Dec 04 10:11:27 compute-0 sshd-session[62253]: Disconnected from invalid user terraria 217.154.62.22 port 45678 [preauth]
Dec 04 10:11:27 compute-0 sudo[62433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxdppirgrhloxxiwpyxhzorhtomexai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843087.062958-193-166527002516583/AnsiballZ_systemd.py'
Dec 04 10:11:27 compute-0 sudo[62433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:27 compute-0 python3.9[62435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:11:27 compute-0 systemd[1]: Reloading.
Dec 04 10:11:27 compute-0 systemd-rc-local-generator[62463]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:27 compute-0 systemd-sysv-generator[62466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:27 compute-0 systemd[1]: Reloading.
Dec 04 10:11:27 compute-0 systemd-sysv-generator[62508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:28 compute-0 systemd-rc-local-generator[62504]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:28 compute-0 systemd[1]: Starting Create netns directory...
Dec 04 10:11:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 04 10:11:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 04 10:11:28 compute-0 systemd[1]: Finished Create netns directory.
Dec 04 10:11:28 compute-0 sudo[62433]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:28 compute-0 python3.9[62663]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:11:28 compute-0 network[62680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:11:28 compute-0 network[62681]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:11:28 compute-0 network[62682]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:11:33 compute-0 sudo[62942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjcvfnxgshuugbqgtpkkswcqkatvouef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843092.7904286-209-148130919612039/AnsiballZ_systemd.py'
Dec 04 10:11:33 compute-0 sudo[62942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:33 compute-0 python3.9[62944]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:11:33 compute-0 systemd[1]: Reloading.
Dec 04 10:11:33 compute-0 systemd-rc-local-generator[62973]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:33 compute-0 systemd-sysv-generator[62976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:33 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 04 10:11:33 compute-0 iptables.init[62983]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 04 10:11:34 compute-0 iptables.init[62983]: iptables: Flushing firewall rules: [  OK  ]
Dec 04 10:11:34 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 04 10:11:34 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 04 10:11:34 compute-0 sudo[62942]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:34 compute-0 sudo[63177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jduikihfaoosoglbznvnwvxyppzpoyre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843094.2415857-209-181720708125580/AnsiballZ_systemd.py'
Dec 04 10:11:34 compute-0 sudo[63177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:34 compute-0 python3.9[63179]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:11:34 compute-0 sudo[63177]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:35 compute-0 sudo[63331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-valktountmgiirxhbrvswbttmljtitmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843095.1441185-225-68811089604384/AnsiballZ_systemd.py'
Dec 04 10:11:35 compute-0 sudo[63331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:35 compute-0 python3.9[63333]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:11:35 compute-0 systemd[1]: Reloading.
Dec 04 10:11:35 compute-0 systemd-sysv-generator[63365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:11:35 compute-0 systemd-rc-local-generator[63362]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:11:36 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 04 10:11:36 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 04 10:11:36 compute-0 sudo[63331]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:36 compute-0 sudo[63522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfxgjdhaazonucvteqzeswjxupspkef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843096.3313117-233-139855467015077/AnsiballZ_command.py'
Dec 04 10:11:36 compute-0 sudo[63522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:36 compute-0 python3.9[63524]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:11:37 compute-0 sudo[63522]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:37 compute-0 sudo[63675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymlxjenupjgparlwxufoqyhuhoukzaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843097.3903086-247-100911318175779/AnsiballZ_stat.py'
Dec 04 10:11:37 compute-0 sudo[63675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:37 compute-0 python3.9[63677]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:37 compute-0 sudo[63675]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:38 compute-0 sudo[63800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnrrsbkexylxapbcwyzvqfispvfkuqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843097.3903086-247-100911318175779/AnsiballZ_copy.py'
Dec 04 10:11:38 compute-0 sudo[63800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:38 compute-0 python3.9[63802]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843097.3903086-247-100911318175779/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:38 compute-0 sudo[63800]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:38 compute-0 sudo[63953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjktmqgjeatburbnsmvsevupcayctie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843098.6329632-262-236307144093545/AnsiballZ_systemd.py'
Dec 04 10:11:38 compute-0 sudo[63953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:39 compute-0 python3.9[63955]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:11:39 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 04 10:11:39 compute-0 sshd[1008]: Received SIGHUP; restarting.
Dec 04 10:11:39 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Dec 04 10:11:39 compute-0 sshd[1008]: Server listening on :: port 22.
Dec 04 10:11:39 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 04 10:11:39 compute-0 sudo[63953]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:39 compute-0 sudo[64109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daupaxotjosnirbcvtqpaslywyrhiwue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843099.5144558-270-162690484079754/AnsiballZ_file.py'
Dec 04 10:11:39 compute-0 sudo[64109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:40 compute-0 python3.9[64111]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:40 compute-0 sudo[64109]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:40 compute-0 sudo[64263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vijzkxtulhbnmmxpeopcovdecufllzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843100.1752677-278-139040778712147/AnsiballZ_stat.py'
Dec 04 10:11:40 compute-0 sudo[64263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:40 compute-0 python3.9[64265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:40 compute-0 sudo[64263]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:40 compute-0 sshd-session[64266]: Invalid user supermaint from 74.249.218.27 port 43776
Dec 04 10:11:40 compute-0 sshd-session[64266]: Received disconnect from 74.249.218.27 port 43776:11: Bye Bye [preauth]
Dec 04 10:11:40 compute-0 sshd-session[64266]: Disconnected from invalid user supermaint 74.249.218.27 port 43776 [preauth]
Dec 04 10:11:41 compute-0 sudo[64388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpomqrsnkfafrksdcshzvbuiucmnznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843100.1752677-278-139040778712147/AnsiballZ_copy.py'
Dec 04 10:11:41 compute-0 sudo[64388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:41 compute-0 python3.9[64390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843100.1752677-278-139040778712147/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:41 compute-0 sudo[64388]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:42 compute-0 sudo[64540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvuskyezqqxvntcnypjxnyadrepkmrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843101.7680857-296-256337607812344/AnsiballZ_timezone.py'
Dec 04 10:11:42 compute-0 sudo[64540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:42 compute-0 python3.9[64542]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 04 10:11:42 compute-0 systemd[1]: Starting Time & Date Service...
Dec 04 10:11:42 compute-0 systemd[1]: Started Time & Date Service.
Dec 04 10:11:42 compute-0 sudo[64540]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:42 compute-0 sshd-session[64223]: Invalid user frontend from 101.47.163.20 port 37616
Dec 04 10:11:43 compute-0 sshd-session[64223]: Received disconnect from 101.47.163.20 port 37616:11: Bye Bye [preauth]
Dec 04 10:11:43 compute-0 sshd-session[64223]: Disconnected from invalid user frontend 101.47.163.20 port 37616 [preauth]
Dec 04 10:11:43 compute-0 sudo[64696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjhawiqnumargfaefilpyddazwwrncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843103.6613054-305-253597343092550/AnsiballZ_file.py'
Dec 04 10:11:43 compute-0 sudo[64696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:44 compute-0 python3.9[64698]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:44 compute-0 sudo[64696]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:44 compute-0 sudo[64848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlrtequmntzoyiwrjqejixmxiwsoveb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843104.3732488-313-233308952657070/AnsiballZ_stat.py'
Dec 04 10:11:44 compute-0 sudo[64848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:44 compute-0 python3.9[64850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:44 compute-0 sudo[64848]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:45 compute-0 sudo[64971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqpxmkebmdwggvkkcbsqbeexoviwsxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843104.3732488-313-233308952657070/AnsiballZ_copy.py'
Dec 04 10:11:45 compute-0 sudo[64971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:45 compute-0 python3.9[64973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843104.3732488-313-233308952657070/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:45 compute-0 sudo[64971]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:45 compute-0 sudo[65123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpinkoajbxzdrvwkoveimqptnivxtozw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843105.6710856-328-239693436633773/AnsiballZ_stat.py'
Dec 04 10:11:45 compute-0 sudo[65123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:46 compute-0 python3.9[65125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:46 compute-0 sudo[65123]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:46 compute-0 sudo[65246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pveocjdpyectsrbgikrfvvdqlooziiwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843105.6710856-328-239693436633773/AnsiballZ_copy.py'
Dec 04 10:11:46 compute-0 sudo[65246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:46 compute-0 python3.9[65248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843105.6710856-328-239693436633773/.source.yaml _original_basename=.37rfxh0_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:46 compute-0 sudo[65246]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:47 compute-0 sudo[65398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeydckovdnlxhozycphmtgszcvimchwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843106.9623334-343-237064595834742/AnsiballZ_stat.py'
Dec 04 10:11:47 compute-0 sudo[65398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:47 compute-0 python3.9[65400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:47 compute-0 sudo[65398]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:47 compute-0 sudo[65521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkirhfsywgjqqzzappfuqttchxzxwlxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843106.9623334-343-237064595834742/AnsiballZ_copy.py'
Dec 04 10:11:47 compute-0 sudo[65521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:48 compute-0 python3.9[65523]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843106.9623334-343-237064595834742/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:48 compute-0 sudo[65521]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:48 compute-0 sudo[65673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzszzkqnkwneifqhdjlazplubuupzzdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843108.2828627-358-268201932431827/AnsiballZ_command.py'
Dec 04 10:11:48 compute-0 sudo[65673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:48 compute-0 python3.9[65675]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:11:48 compute-0 sudo[65673]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:49 compute-0 sudo[65826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzhxczwncxgrhmyrjkbbqfxpxtrauwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843109.0202212-366-241398221936406/AnsiballZ_command.py'
Dec 04 10:11:49 compute-0 sudo[65826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:49 compute-0 python3.9[65828]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:11:49 compute-0 sudo[65826]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:50 compute-0 sudo[65979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlwdeltejjsyuoghnpugxfasmliypeh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843109.7909408-374-97373620949020/AnsiballZ_edpm_nftables_from_files.py'
Dec 04 10:11:50 compute-0 sudo[65979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:50 compute-0 python3[65981]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 04 10:11:50 compute-0 sudo[65979]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:51 compute-0 sudo[66131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzniiwyxsgpmsdqeeivqjorpoajcdigb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843110.6663737-382-239936606705847/AnsiballZ_stat.py'
Dec 04 10:11:51 compute-0 sudo[66131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:51 compute-0 python3.9[66133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:51 compute-0 sudo[66131]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:51 compute-0 sudo[66254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apmocwvshmdhclhbxunxbwnuijnqygrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843110.6663737-382-239936606705847/AnsiballZ_copy.py'
Dec 04 10:11:51 compute-0 sudo[66254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:51 compute-0 python3.9[66256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843110.6663737-382-239936606705847/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:51 compute-0 sudo[66254]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:52 compute-0 sudo[66406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owvjwzmgaguirzopkulorfvzgntmwvlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843111.9639623-397-185575031110288/AnsiballZ_stat.py'
Dec 04 10:11:52 compute-0 sudo[66406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:52 compute-0 python3.9[66408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:52 compute-0 sudo[66406]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:52 compute-0 sudo[66529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbhdgrvxkckfrxwzhddhuckvhghsirbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843111.9639623-397-185575031110288/AnsiballZ_copy.py'
Dec 04 10:11:52 compute-0 sudo[66529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:52 compute-0 python3.9[66531]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843111.9639623-397-185575031110288/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:53 compute-0 sudo[66529]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:53 compute-0 sudo[66681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kficzozqgxsadcepymfjehtsvcpcbdsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843113.185842-412-64939927306543/AnsiballZ_stat.py'
Dec 04 10:11:53 compute-0 sudo[66681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:53 compute-0 python3.9[66683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:53 compute-0 sudo[66681]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:54 compute-0 sudo[66804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hioqpquqcyofnjmwffxezmapkxxckbsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843113.185842-412-64939927306543/AnsiballZ_copy.py'
Dec 04 10:11:54 compute-0 sudo[66804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:54 compute-0 python3.9[66806]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843113.185842-412-64939927306543/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:54 compute-0 sudo[66804]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:54 compute-0 sudo[66956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyxtzcbxtikaycriupkgtsagliksotnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843114.4844103-427-33259816022164/AnsiballZ_stat.py'
Dec 04 10:11:54 compute-0 sudo[66956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:54 compute-0 python3.9[66958]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:55 compute-0 sudo[66956]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:55 compute-0 sudo[67079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuypraizjeqjubdyuvumtrmyfzcjfuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843114.4844103-427-33259816022164/AnsiballZ_copy.py'
Dec 04 10:11:55 compute-0 sudo[67079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:55 compute-0 python3.9[67081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843114.4844103-427-33259816022164/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:55 compute-0 sudo[67079]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:56 compute-0 sudo[67231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnsspglxsfcmzyxnrsgbedtghygyznln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843115.7152786-442-227082775057702/AnsiballZ_stat.py'
Dec 04 10:11:56 compute-0 sudo[67231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:56 compute-0 python3.9[67233]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:11:56 compute-0 sudo[67231]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:56 compute-0 sudo[67354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylnruglonsygbjslsjscxxnzkaeyivag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843115.7152786-442-227082775057702/AnsiballZ_copy.py'
Dec 04 10:11:56 compute-0 sudo[67354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:56 compute-0 python3.9[67356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843115.7152786-442-227082775057702/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:56 compute-0 sudo[67354]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:57 compute-0 sudo[67506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxnsjnlzgtucpzjepoyxwwbmkscaaazn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843116.949014-457-61439486970025/AnsiballZ_file.py'
Dec 04 10:11:57 compute-0 sudo[67506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:57 compute-0 python3.9[67508]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:57 compute-0 sudo[67506]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:58 compute-0 sudo[67658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbcvrslvhwwthxdxxqhkgghvhigwiiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843117.726086-465-227797426825371/AnsiballZ_command.py'
Dec 04 10:11:58 compute-0 sudo[67658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:58 compute-0 python3.9[67660]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:11:58 compute-0 sudo[67658]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:59 compute-0 sudo[67817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgazyljfpmddrnznewijuffgyrijzjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843118.4940884-473-169340871781112/AnsiballZ_blockinfile.py'
Dec 04 10:11:59 compute-0 sudo[67817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:59 compute-0 python3.9[67819]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:59 compute-0 sudo[67817]: pam_unix(sudo:session): session closed for user root
Dec 04 10:11:59 compute-0 sudo[67970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ystpbwlojijbelnvsoqkzszygpouicuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843119.4895413-482-40729056488157/AnsiballZ_file.py'
Dec 04 10:11:59 compute-0 sudo[67970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:11:59 compute-0 python3.9[67972]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:11:59 compute-0 sudo[67970]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:00 compute-0 sudo[68122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orkmncnhfnyqymkqnowhbeufhihrnklq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843120.1205466-482-248657570950701/AnsiballZ_file.py'
Dec 04 10:12:00 compute-0 sudo[68122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:00 compute-0 python3.9[68124]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:12:00 compute-0 sudo[68122]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:01 compute-0 sudo[68274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuetlwhmwdrrtdzhkbarqbpbzzcjrvdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843120.7535288-497-254830855082684/AnsiballZ_mount.py'
Dec 04 10:12:01 compute-0 sudo[68274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:01 compute-0 python3.9[68276]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 04 10:12:01 compute-0 sudo[68274]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:01 compute-0 sudo[68427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnnuqphqeujcqixencvswlheuapvscrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843121.6936467-497-172452316011341/AnsiballZ_mount.py'
Dec 04 10:12:01 compute-0 sudo[68427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:02 compute-0 python3.9[68429]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 04 10:12:02 compute-0 sudo[68427]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:02 compute-0 sshd-session[59221]: Connection closed by 192.168.122.30 port 41936
Dec 04 10:12:02 compute-0 sshd-session[59218]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:12:02 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 04 10:12:02 compute-0 systemd[1]: session-14.scope: Consumed 38.089s CPU time.
Dec 04 10:12:02 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Dec 04 10:12:02 compute-0 systemd-logind[798]: Removed session 14.
Dec 04 10:12:05 compute-0 sshd-session[68455]: Received disconnect from 103.149.86.230 port 57482:11: Bye Bye [preauth]
Dec 04 10:12:05 compute-0 sshd-session[68455]: Disconnected from authenticating user root 103.149.86.230 port 57482 [preauth]
Dec 04 10:12:07 compute-0 sshd-session[68458]: Accepted publickey for zuul from 192.168.122.30 port 43856 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:12:07 compute-0 systemd-logind[798]: New session 15 of user zuul.
Dec 04 10:12:07 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 04 10:12:07 compute-0 sshd-session[68458]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:12:08 compute-0 sudo[68611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lteuashhxxmusschljvsefwuuzgxxzgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843127.8615322-16-243044448399526/AnsiballZ_tempfile.py'
Dec 04 10:12:08 compute-0 sudo[68611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:08 compute-0 python3.9[68613]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 04 10:12:08 compute-0 sudo[68611]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:09 compute-0 sudo[68763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqseapjbxermhnfmywftqezisfqxxas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843128.7179005-28-239289907082976/AnsiballZ_stat.py'
Dec 04 10:12:09 compute-0 sudo[68763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:09 compute-0 python3.9[68765]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:12:09 compute-0 sudo[68763]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:10 compute-0 sudo[68915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdbgfvlvebnywrzkqqgfdpzseigfgtiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843129.5908337-38-43027133741262/AnsiballZ_setup.py'
Dec 04 10:12:10 compute-0 sudo[68915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:10 compute-0 python3.9[68917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:12:10 compute-0 sudo[68915]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:11 compute-0 sudo[69067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlowtoutvxymphlfheovkgyzlxxbrwnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843130.7811995-47-143216174410511/AnsiballZ_blockinfile.py'
Dec 04 10:12:11 compute-0 sudo[69067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:11 compute-0 python3.9[69069]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaDrGsfyH66GeTPneOf4P9cqhJJcxgP3bu0E7RAjEstx4o7NevlnfodrpsWI3GhJ5z8ru5yYrnT8gj6K/RfM5zjWXW+Ul4lDWJ1UnIBsqOM+qHdwpyOanGFwsD1SStOqDLQRPhop1d9LdePkBXvJSXJ80Mpcjwm1bfGwN/fJl8zLFWskfkIYThTGAzthtkHNPXQXTBX+VOKpcthU/qN5CP8Y/w/9w96vwq/0dHExjueOOk28BTWEQCwxPpkb1Wrd6hQ3KYnZye2JOZh3qqNaX44hPg8VLhv3agVerNv6vRiI2EbdHHYD2I5gXfV7bQGhRzhpFEZm2DfYLr5b8H1kG9ocx3KHW2+TctXCO2hCdJhjjuQQb033in90uXPuMsEEvmtCnc5vbJ5DKpgiaJysNZhmTkpKiJ4UVa6HeBh3riio7zeHc3bjI/1AD1cejpy6OEoWwk/X8ydA6bau1ApGvoHoEAXhlES4J/a6CUovnch+uMkircx8hJcYthuNhJIk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBhSkNncUNzxmzyjy22XSoHmC2WfRWk9PEzKRLlibq2
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBeg0yEcOxT9ax0vZC/VGcWoLt2isE/U7UTL1uRpP8q51Um5h2uaP4tcFVGL1g6uXlC20O3SCTRskwpUg5sj6I=
                                             create=True mode=0644 path=/tmp/ansible.o_vmo2hl state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:12:11 compute-0 sudo[69067]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:12 compute-0 sudo[69219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toeflloslcfaiwgcxxytfakgykjcjiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843131.6463015-55-126454449708973/AnsiballZ_command.py'
Dec 04 10:12:12 compute-0 sudo[69219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:12 compute-0 python3.9[69221]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.o_vmo2hl' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:12 compute-0 sudo[69219]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:12 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 04 10:12:13 compute-0 sudo[69376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtkievzlomlxsgqvvyqkvlotctmajnas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843132.6318839-63-242313275782617/AnsiballZ_file.py'
Dec 04 10:12:13 compute-0 sudo[69376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:13 compute-0 python3.9[69378]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.o_vmo2hl state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:12:13 compute-0 sudo[69376]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:13 compute-0 sshd-session[68461]: Connection closed by 192.168.122.30 port 43856
Dec 04 10:12:13 compute-0 sshd-session[68458]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:12:13 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 04 10:12:13 compute-0 systemd[1]: session-15.scope: Consumed 3.806s CPU time.
Dec 04 10:12:13 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Dec 04 10:12:13 compute-0 systemd-logind[798]: Removed session 15.
Dec 04 10:12:19 compute-0 sshd-session[69404]: Accepted publickey for zuul from 192.168.122.30 port 46002 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:12:19 compute-0 systemd-logind[798]: New session 16 of user zuul.
Dec 04 10:12:19 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 04 10:12:19 compute-0 sshd-session[69404]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:12:20 compute-0 python3.9[69559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:12:21 compute-0 sshd-session[69460]: Received disconnect from 103.179.218.243 port 40942:11: Bye Bye [preauth]
Dec 04 10:12:21 compute-0 sshd-session[69460]: Disconnected from authenticating user root 103.179.218.243 port 40942 [preauth]
Dec 04 10:12:21 compute-0 sudo[69713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbskkuasqylbzmjjhnmqolpqfodesfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843140.864817-32-59537486784714/AnsiballZ_systemd.py'
Dec 04 10:12:21 compute-0 sudo[69713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:21 compute-0 python3.9[69715]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 04 10:12:21 compute-0 sudo[69713]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:22 compute-0 sudo[69867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvauyavazqtuzrrhexmfkumtuxhtrkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843142.057385-40-217083618833767/AnsiballZ_systemd.py'
Dec 04 10:12:22 compute-0 sudo[69867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:22 compute-0 python3.9[69869]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:12:22 compute-0 sudo[69867]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:23 compute-0 sudo[70020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvkcobvmdoesoaxjfjcjeevtojazruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843142.939427-49-48273862111129/AnsiballZ_command.py'
Dec 04 10:12:23 compute-0 sudo[70020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:23 compute-0 python3.9[70022]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:23 compute-0 sudo[70020]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:24 compute-0 sudo[70173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gayoiwwixuzprwbsprevcnwwqcobiqvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843143.8957229-57-37087089201229/AnsiballZ_stat.py'
Dec 04 10:12:24 compute-0 sudo[70173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:24 compute-0 python3.9[70175]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:12:24 compute-0 sudo[70173]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:24 compute-0 sudo[70327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuipttqjrllqwkdjbdidvoimdubvhctz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843144.667007-65-62726224224720/AnsiballZ_command.py'
Dec 04 10:12:24 compute-0 sudo[70327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:25 compute-0 python3.9[70329]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:25 compute-0 sudo[70327]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:25 compute-0 sudo[70482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skpcpzkemsniighgapwrihhqabkzajeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843145.3207686-73-101391292192492/AnsiballZ_file.py'
Dec 04 10:12:25 compute-0 sudo[70482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:25 compute-0 python3.9[70484]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:12:25 compute-0 sudo[70482]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:26 compute-0 sshd-session[69407]: Connection closed by 192.168.122.30 port 46002
Dec 04 10:12:26 compute-0 sshd-session[69404]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:12:26 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 04 10:12:26 compute-0 systemd[1]: session-16.scope: Consumed 4.704s CPU time.
Dec 04 10:12:26 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Dec 04 10:12:26 compute-0 systemd-logind[798]: Removed session 16.
Dec 04 10:12:32 compute-0 sshd-session[70509]: Accepted publickey for zuul from 192.168.122.30 port 59620 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:12:32 compute-0 systemd-logind[798]: New session 17 of user zuul.
Dec 04 10:12:32 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 04 10:12:32 compute-0 sshd-session[70509]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:12:32 compute-0 sshd-session[70612]: Received disconnect from 107.175.213.239 port 46718:11: Bye Bye [preauth]
Dec 04 10:12:32 compute-0 sshd-session[70612]: Disconnected from authenticating user root 107.175.213.239 port 46718 [preauth]
Dec 04 10:12:33 compute-0 python3.9[70664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:12:33 compute-0 sudo[70818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxemdvomikdmxmzljquardehxwdmfnlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843153.6345792-34-11478128378760/AnsiballZ_setup.py'
Dec 04 10:12:33 compute-0 sudo[70818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:34 compute-0 python3.9[70820]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:12:34 compute-0 sudo[70818]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:34 compute-0 sudo[70902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdfaivtcyunlwbviuyhkfmxggmhuuyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843153.6345792-34-11478128378760/AnsiballZ_dnf.py'
Dec 04 10:12:34 compute-0 sudo[70902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:35 compute-0 python3.9[70904]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 04 10:12:36 compute-0 sudo[70902]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:37 compute-0 python3.9[71055]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:38 compute-0 python3.9[71206]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:12:39 compute-0 python3.9[71356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:12:39 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:12:39 compute-0 python3.9[71507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:12:40 compute-0 sshd-session[70512]: Connection closed by 192.168.122.30 port 59620
Dec 04 10:12:40 compute-0 sshd-session[70509]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:12:40 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Dec 04 10:12:40 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 04 10:12:40 compute-0 systemd[1]: session-17.scope: Consumed 6.057s CPU time.
Dec 04 10:12:40 compute-0 systemd-logind[798]: Removed session 17.
Dec 04 10:12:49 compute-0 sshd-session[71532]: Connection closed by authenticating user root 59.24.194.207 port 53690 [preauth]
Dec 04 10:12:49 compute-0 sshd-session[71534]: Accepted publickey for zuul from 38.102.83.189 port 60740 ssh2: RSA SHA256:jo727a/7C1xTjXvQrJpywhDS5FmMK+1r+hTQ2rn/09o
Dec 04 10:12:49 compute-0 systemd-logind[798]: New session 18 of user zuul.
Dec 04 10:12:49 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 04 10:12:49 compute-0 sshd-session[71534]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:12:50 compute-0 sudo[71610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikksbgajbbdxgkqtvbejssdyliqlmto ; /usr/bin/python3'
Dec 04 10:12:50 compute-0 sudo[71610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:50 compute-0 useradd[71614]: new group: name=ceph-admin, GID=42478
Dec 04 10:12:50 compute-0 useradd[71614]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 04 10:12:50 compute-0 sudo[71610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:50 compute-0 sudo[71696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpjjddeatluchjgeeqaczfzomymlbyg ; /usr/bin/python3'
Dec 04 10:12:50 compute-0 sudo[71696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:50 compute-0 sudo[71696]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:51 compute-0 sudo[71769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oijyrkbtwbhgspgnuiiqplskfadyfgdt ; /usr/bin/python3'
Dec 04 10:12:51 compute-0 sudo[71769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:51 compute-0 sudo[71769]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:51 compute-0 sudo[71819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxfekevfdnxblvikmydvilgvnisvqkf ; /usr/bin/python3'
Dec 04 10:12:51 compute-0 sudo[71819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:51 compute-0 sudo[71819]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:52 compute-0 sudo[71845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izptelizsmeppbfijkiukxtmobyaixig ; /usr/bin/python3'
Dec 04 10:12:52 compute-0 sudo[71845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:52 compute-0 sudo[71845]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:52 compute-0 sudo[71871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdunvtxhxmwihdccvmnxgptrsvuingxs ; /usr/bin/python3'
Dec 04 10:12:52 compute-0 sudo[71871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:52 compute-0 sudo[71871]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:52 compute-0 sudo[71897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upjkcxhiuomrwoqnqqscobweatxctffc ; /usr/bin/python3'
Dec 04 10:12:52 compute-0 sudo[71897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:52 compute-0 sudo[71897]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:53 compute-0 sudo[71975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwfockrctwojgzckwuytsibixcdrmit ; /usr/bin/python3'
Dec 04 10:12:53 compute-0 sudo[71975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:53 compute-0 sudo[71975]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:53 compute-0 sudo[72050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoyizaecqulryzskiioksumbdmfcdsbc ; /usr/bin/python3'
Dec 04 10:12:53 compute-0 sudo[72050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:53 compute-0 sudo[72050]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:54 compute-0 sshd-session[71995]: Invalid user deploy from 217.154.62.22 port 41828
Dec 04 10:12:54 compute-0 sshd-session[71995]: Received disconnect from 217.154.62.22 port 41828:11: Bye Bye [preauth]
Dec 04 10:12:54 compute-0 sshd-session[71995]: Disconnected from invalid user deploy 217.154.62.22 port 41828 [preauth]
Dec 04 10:12:54 compute-0 sudo[72152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aonprpjkfouazbkwloxbqmdzanmbaqex ; /usr/bin/python3'
Dec 04 10:12:54 compute-0 sudo[72152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:54 compute-0 sudo[72152]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:54 compute-0 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkgwerdfcltakyzvmlzeebfijrdwyxgo ; /usr/bin/python3'
Dec 04 10:12:54 compute-0 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:54 compute-0 sudo[72225]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:55 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usixlgqejmnwfwwxeuouagnnkhgwtucm ; /usr/bin/python3'
Dec 04 10:12:55 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:55 compute-0 sshd-session[72252]: Received disconnect from 74.249.218.27 port 55384:11: Bye Bye [preauth]
Dec 04 10:12:55 compute-0 sshd-session[72252]: Disconnected from authenticating user root 74.249.218.27 port 55384 [preauth]
Dec 04 10:12:55 compute-0 python3[72279]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:12:56 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:57 compute-0 sudo[72372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxxsodiayokqipvgklzhtgxzknlmlzu ; /usr/bin/python3'
Dec 04 10:12:57 compute-0 sudo[72372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:57 compute-0 python3[72374]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 10:12:58 compute-0 sudo[72372]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:58 compute-0 sudo[72399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgscorpzaklggyfrlsqbjkglhkifuwpn ; /usr/bin/python3'
Dec 04 10:12:58 compute-0 sudo[72399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:58 compute-0 python3[72401]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:12:58 compute-0 sudo[72399]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:59 compute-0 sudo[72425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtgoxlvfkdwrefgqvhhqlxgrgfvqsis ; /usr/bin/python3'
Dec 04 10:12:59 compute-0 sudo[72425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:59 compute-0 python3[72427]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:59 compute-0 kernel: loop: module loaded
Dec 04 10:12:59 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 04 10:12:59 compute-0 sudo[72425]: pam_unix(sudo:session): session closed for user root
Dec 04 10:12:59 compute-0 sudo[72460]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxaodinikluampacvjirawnmyiaesrw ; /usr/bin/python3'
Dec 04 10:12:59 compute-0 sudo[72460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:12:59 compute-0 python3[72462]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:12:59 compute-0 lvm[72465]: PV /dev/loop3 not used.
Dec 04 10:12:59 compute-0 lvm[72474]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:12:59 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 04 10:13:00 compute-0 lvm[72476]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 04 10:13:00 compute-0 sudo[72460]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:00 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 04 10:13:00 compute-0 sudo[72552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnfbxuroolgrpkmemlcueeaqdakiovgc ; /usr/bin/python3'
Dec 04 10:13:00 compute-0 sudo[72552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:00 compute-0 python3[72554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:13:00 compute-0 sudo[72552]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:00 compute-0 sudo[72625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpbroxdbymgfyuddzgapoyatsnpdugzd ; /usr/bin/python3'
Dec 04 10:13:00 compute-0 sudo[72625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:00 compute-0 python3[72627]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843180.1285412-36129-216918035400012/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:00 compute-0 sudo[72625]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:01 compute-0 sudo[72675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatqmvgomerqpxtsbnslvnfcbnyprtgv ; /usr/bin/python3'
Dec 04 10:13:01 compute-0 sudo[72675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:01 compute-0 python3[72677]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:13:01 compute-0 systemd[1]: Reloading.
Dec 04 10:13:01 compute-0 systemd-rc-local-generator[72704]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:13:01 compute-0 systemd-sysv-generator[72709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:13:01 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 04 10:13:01 compute-0 bash[72717]: /dev/loop3: [64513]:4327949 (/var/lib/ceph-osd-0.img)
Dec 04 10:13:01 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 04 10:13:01 compute-0 sudo[72675]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:01 compute-0 lvm[72718]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:13:01 compute-0 lvm[72718]: VG ceph_vg0 finished
Dec 04 10:13:02 compute-0 sudo[72742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwjjzxzakknlbjsmnfyzruaufbiuqaao ; /usr/bin/python3'
Dec 04 10:13:02 compute-0 sudo[72742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:02 compute-0 python3[72744]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 10:13:03 compute-0 sudo[72742]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:03 compute-0 sudo[72769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfplqyanvpiaqgyldsxcmpskzwsapfkj ; /usr/bin/python3'
Dec 04 10:13:03 compute-0 sudo[72769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:04 compute-0 python3[72771]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:04 compute-0 sudo[72769]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:04 compute-0 sudo[72795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbhifuxxihtgsajkvdtqqhkuijxtlibs ; /usr/bin/python3'
Dec 04 10:13:04 compute-0 sudo[72795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:04 compute-0 python3[72797]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:04 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 04 10:13:04 compute-0 sudo[72795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:04 compute-0 sudo[72827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlffwxoalkqzloxyiupmopbafozwmfn ; /usr/bin/python3'
Dec 04 10:13:04 compute-0 sudo[72827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:04 compute-0 python3[72829]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:04 compute-0 lvm[72832]: PV /dev/loop4 not used.
Dec 04 10:13:05 compute-0 lvm[72841]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:13:05 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 04 10:13:05 compute-0 lvm[72843]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 04 10:13:05 compute-0 sudo[72827]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:05 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 04 10:13:05 compute-0 sudo[72920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxmgxjinctrebnkdcdccjwofarecoicn ; /usr/bin/python3'
Dec 04 10:13:05 compute-0 sudo[72920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:05 compute-0 python3[72922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:13:05 compute-0 sudo[72920]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:05 compute-0 sudo[72993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfgzysvvyazkpnsyashltlktrlhzfccy ; /usr/bin/python3'
Dec 04 10:13:05 compute-0 sudo[72993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:05 compute-0 python3[72995]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843185.2448385-36156-259620473456338/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:05 compute-0 sudo[72993]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:06 compute-0 sudo[73043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvuzpcmufcpvhcyhjzppunfgulrlpul ; /usr/bin/python3'
Dec 04 10:13:06 compute-0 sudo[73043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:06 compute-0 python3[73045]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:13:06 compute-0 systemd[1]: Reloading.
Dec 04 10:13:06 compute-0 systemd-rc-local-generator[73079]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:13:06 compute-0 systemd-sysv-generator[73084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:13:06 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 04 10:13:06 compute-0 bash[73088]: /dev/loop4: [64513]:4327955 (/var/lib/ceph-osd-1.img)
Dec 04 10:13:06 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 04 10:13:06 compute-0 lvm[73089]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:13:06 compute-0 lvm[73089]: VG ceph_vg1 finished
Dec 04 10:13:06 compute-0 sudo[73043]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:06 compute-0 chronyd[58735]: Selected source 207.34.48.31 (pool.ntp.org)
Dec 04 10:13:06 compute-0 sudo[73113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllqbaodjncwrhowgrzwmqvoazzbwayt ; /usr/bin/python3'
Dec 04 10:13:06 compute-0 sudo[73113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:07 compute-0 python3[73115]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 10:13:08 compute-0 sudo[73113]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:08 compute-0 sudo[73140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdyrjzxwmwsnsnvaojtupngkdnnvvjzd ; /usr/bin/python3'
Dec 04 10:13:08 compute-0 sudo[73140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:08 compute-0 python3[73142]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:08 compute-0 sudo[73140]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:08 compute-0 sudo[73166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzaxfteevrrnsoqpewsdluedwijqdag ; /usr/bin/python3'
Dec 04 10:13:08 compute-0 sudo[73166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:09 compute-0 python3[73168]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:09 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 04 10:13:09 compute-0 sudo[73166]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:09 compute-0 sudo[73198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyiumexsrkyoltcakjsfyyiqowabsduj ; /usr/bin/python3'
Dec 04 10:13:09 compute-0 sudo[73198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:09 compute-0 python3[73200]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:09 compute-0 lvm[73203]: PV /dev/loop5 not used.
Dec 04 10:13:09 compute-0 lvm[73205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:13:09 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 04 10:13:10 compute-0 lvm[73212]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 04 10:13:10 compute-0 lvm[73216]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:13:10 compute-0 lvm[73216]: VG ceph_vg2 finished
Dec 04 10:13:10 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 04 10:13:10 compute-0 sudo[73198]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:10 compute-0 sudo[73293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpcciwwdbibadeeegyoecumhyaiiytc ; /usr/bin/python3'
Dec 04 10:13:10 compute-0 sudo[73293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:10 compute-0 python3[73295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:13:10 compute-0 sudo[73293]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:10 compute-0 sudo[73366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrxdrcewqpjyqqsaltqopgzbuctxzlf ; /usr/bin/python3'
Dec 04 10:13:10 compute-0 sudo[73366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:10 compute-0 python3[73368]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843190.2001336-36183-43864294763365/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:10 compute-0 sudo[73366]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:11 compute-0 sudo[73416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzjepitptvhfoxrrefrnwnogppxyoff ; /usr/bin/python3'
Dec 04 10:13:11 compute-0 sudo[73416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:11 compute-0 python3[73418]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:13:11 compute-0 systemd[1]: Reloading.
Dec 04 10:13:11 compute-0 systemd-rc-local-generator[73448]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:13:11 compute-0 systemd-sysv-generator[73453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:13:12 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 04 10:13:12 compute-0 bash[73458]: /dev/loop5: [64513]:4327958 (/var/lib/ceph-osd-2.img)
Dec 04 10:13:12 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 04 10:13:12 compute-0 sudo[73416]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:12 compute-0 lvm[73459]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:13:12 compute-0 lvm[73459]: VG ceph_vg2 finished
Dec 04 10:13:14 compute-0 python3[73483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:13:16 compute-0 sudo[73576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muraeuwrhjgykvpxkgyaegqekfvfvcql ; /usr/bin/python3'
Dec 04 10:13:16 compute-0 sudo[73576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:16 compute-0 sshd-session[73551]: Invalid user zjw from 103.149.86.230 port 54824
Dec 04 10:13:16 compute-0 python3[73578]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 10:13:16 compute-0 sshd-session[73551]: Received disconnect from 103.149.86.230 port 54824:11: Bye Bye [preauth]
Dec 04 10:13:16 compute-0 sshd-session[73551]: Disconnected from invalid user zjw 103.149.86.230 port 54824 [preauth]
Dec 04 10:13:18 compute-0 sudo[73576]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:19 compute-0 sudo[73633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phtehfgnyswwzzvgalehasfjtdmudxhc ; /usr/bin/python3'
Dec 04 10:13:19 compute-0 sudo[73633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:19 compute-0 python3[73635]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 04 10:13:22 compute-0 groupadd[73645]: group added to /etc/group: name=cephadm, GID=992
Dec 04 10:13:22 compute-0 groupadd[73645]: group added to /etc/gshadow: name=cephadm
Dec 04 10:13:22 compute-0 groupadd[73645]: new group: name=cephadm, GID=992
Dec 04 10:13:22 compute-0 useradd[73652]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 04 10:13:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:13:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:13:23 compute-0 sudo[73633]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:13:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:13:23 compute-0 systemd[1]: run-r47ff6ba7ca8442d3bf258dbb2dbe7cfe.service: Deactivated successfully.
Dec 04 10:13:23 compute-0 sudo[73754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzitmoytsojxuggdodgvrzcgpavofrta ; /usr/bin/python3'
Dec 04 10:13:23 compute-0 sudo[73754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:23 compute-0 python3[73756]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:23 compute-0 sudo[73754]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:23 compute-0 sudo[73782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjanffqccnkgrmuvgmszxhlffqppgjyh ; /usr/bin/python3'
Dec 04 10:13:23 compute-0 sudo[73782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:24 compute-0 python3[73784]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:13:24 compute-0 sudo[73782]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:24 compute-0 sudo[73821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetnbtezbmhtkrimaszuwfhfvrmvnpfl ; /usr/bin/python3'
Dec 04 10:13:24 compute-0 sudo[73821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:25 compute-0 python3[73823]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:25 compute-0 sudo[73821]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:25 compute-0 sudo[73847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngjqjjpbzmeeoaxsdvlhlkxfxprkhmsm ; /usr/bin/python3'
Dec 04 10:13:25 compute-0 sudo[73847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:25 compute-0 python3[73849]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:25 compute-0 sudo[73847]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:26 compute-0 sudo[73925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsrqdsirmtwynaowolqrqoqyzgmvbxvw ; /usr/bin/python3'
Dec 04 10:13:26 compute-0 sudo[73925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:26 compute-0 python3[73927]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:13:26 compute-0 sudo[73925]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:26 compute-0 sudo[73998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nolphrvccypcdkgpxnlstebtopqtxski ; /usr/bin/python3'
Dec 04 10:13:26 compute-0 sudo[73998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:26 compute-0 python3[74000]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843205.8727083-36331-233421612422645/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:26 compute-0 sudo[73998]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:27 compute-0 sudo[74100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylbcmhpsgkizsywgezidxywfrlnigha ; /usr/bin/python3'
Dec 04 10:13:27 compute-0 sudo[74100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:27 compute-0 python3[74102]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:13:27 compute-0 sudo[74100]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:27 compute-0 sudo[74173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbbhaqspiyoqstrlqxfuolkmokizopq ; /usr/bin/python3'
Dec 04 10:13:27 compute-0 sudo[74173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:27 compute-0 python3[74175]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843207.1810696-36349-10698057980318/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:13:27 compute-0 sudo[74173]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:28 compute-0 sudo[74223]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewbeobxolchuaebujrgvvyfqenuirda ; /usr/bin/python3'
Dec 04 10:13:28 compute-0 sudo[74223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:28 compute-0 python3[74225]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:28 compute-0 sudo[74223]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:28 compute-0 sudo[74251]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdfnckaezqdwyufpeseyjjrgxaruevrd ; /usr/bin/python3'
Dec 04 10:13:28 compute-0 sudo[74251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:28 compute-0 python3[74253]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:28 compute-0 sudo[74251]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:28 compute-0 sudo[74279]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abawjshdsgcrfkpufoqkmbdbvywtdqbb ; /usr/bin/python3'
Dec 04 10:13:28 compute-0 sudo[74279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:29 compute-0 python3[74281]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:13:29 compute-0 sudo[74279]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:29 compute-0 sudo[74307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meqzalmshnykcwuzbrhmpxrhmbmompmn ; /usr/bin/python3'
Dec 04 10:13:29 compute-0 sudo[74307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:13:29 compute-0 python3[74309]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:13:29 compute-0 sshd-session[74313]: Accepted publickey for ceph-admin from 192.168.122.100 port 43920 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:13:29 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 04 10:13:29 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 04 10:13:29 compute-0 systemd-logind[798]: New session 19 of user ceph-admin.
Dec 04 10:13:29 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 04 10:13:29 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 04 10:13:29 compute-0 systemd[74317]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:13:29 compute-0 systemd[74317]: Queued start job for default target Main User Target.
Dec 04 10:13:29 compute-0 systemd[74317]: Created slice User Application Slice.
Dec 04 10:13:29 compute-0 systemd[74317]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 04 10:13:29 compute-0 systemd[74317]: Started Daily Cleanup of User's Temporary Directories.
Dec 04 10:13:29 compute-0 systemd[74317]: Reached target Paths.
Dec 04 10:13:29 compute-0 systemd[74317]: Reached target Timers.
Dec 04 10:13:29 compute-0 systemd[74317]: Starting D-Bus User Message Bus Socket...
Dec 04 10:13:29 compute-0 systemd[74317]: Starting Create User's Volatile Files and Directories...
Dec 04 10:13:29 compute-0 systemd[74317]: Finished Create User's Volatile Files and Directories.
Dec 04 10:13:29 compute-0 systemd[74317]: Listening on D-Bus User Message Bus Socket.
Dec 04 10:13:29 compute-0 systemd[74317]: Reached target Sockets.
Dec 04 10:13:29 compute-0 systemd[74317]: Reached target Basic System.
Dec 04 10:13:29 compute-0 systemd[74317]: Reached target Main User Target.
Dec 04 10:13:29 compute-0 systemd[74317]: Startup finished in 129ms.
Dec 04 10:13:29 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 04 10:13:29 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 04 10:13:29 compute-0 sshd-session[74313]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:13:29 compute-0 sudo[74333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 04 10:13:29 compute-0 sudo[74333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:13:29 compute-0 sudo[74333]: pam_unix(sudo:session): session closed for user root
Dec 04 10:13:29 compute-0 sshd-session[74332]: Received disconnect from 192.168.122.100 port 43920:11: disconnected by user
Dec 04 10:13:29 compute-0 sshd-session[74332]: Disconnected from user ceph-admin 192.168.122.100 port 43920
Dec 04 10:13:29 compute-0 sshd-session[74313]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 04 10:13:29 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Dec 04 10:13:29 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 04 10:13:29 compute-0 systemd-logind[798]: Removed session 19.
Dec 04 10:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1353328872-lower\x2dmapped.mount: Deactivated successfully.
Dec 04 10:13:40 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 04 10:13:40 compute-0 systemd[74317]: Activating special unit Exit the Session...
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped target Main User Target.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped target Basic System.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped target Paths.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped target Sockets.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped target Timers.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 04 10:13:40 compute-0 systemd[74317]: Closed D-Bus User Message Bus Socket.
Dec 04 10:13:40 compute-0 systemd[74317]: Stopped Create User's Volatile Files and Directories.
Dec 04 10:13:40 compute-0 systemd[74317]: Removed slice User Application Slice.
Dec 04 10:13:40 compute-0 systemd[74317]: Reached target Shutdown.
Dec 04 10:13:40 compute-0 systemd[74317]: Finished Exit the Session.
Dec 04 10:13:40 compute-0 systemd[74317]: Reached target Exit the Session.
Dec 04 10:13:40 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 04 10:13:40 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 04 10:13:40 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 04 10:13:40 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 04 10:13:40 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 04 10:13:40 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 04 10:13:40 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 04 10:13:42 compute-0 sshd-session[74472]: Received disconnect from 103.179.218.243 port 41052:11: Bye Bye [preauth]
Dec 04 10:13:42 compute-0 sshd-session[74472]: Disconnected from authenticating user root 103.179.218.243 port 41052 [preauth]
Dec 04 10:13:59 compute-0 podman[74410]: 2025-12-04 10:13:59.917654865 +0000 UTC m=+29.666856772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:13:59 compute-0 podman[74479]: 2025-12-04 10:13:59.985148747 +0000 UTC m=+0.037565295 container create 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 04 10:14:00 compute-0 systemd[1]: Started libpod-conmon-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope.
Dec 04 10:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:13:59.966555925 +0000 UTC m=+0.018972393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:14:00.068955576 +0000 UTC m=+0.121372044 container init 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:14:00.074852949 +0000 UTC m=+0.127269387 container start 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:14:00.079964594 +0000 UTC m=+0.132381042 container attach 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:00 compute-0 hungry_lovelace[74495]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 04 10:14:00 compute-0 systemd[1]: libpod-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:14:00.1805037 +0000 UTC m=+0.232920188 container died 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a183c112e72f6f87f851015dae2b418fb21a32da2d1fff53ef4706adaf788c5c-merged.mount: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74479]: 2025-12-04 10:14:00.221814814 +0000 UTC m=+0.274231262 container remove 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:00 compute-0 systemd[1]: libpod-conmon-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.284161261 +0000 UTC m=+0.043184222 container create 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:14:00 compute-0 systemd[1]: Started libpod-conmon-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope.
Dec 04 10:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.343852792 +0000 UTC m=+0.102875733 container init 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.349158362 +0000 UTC m=+0.108181273 container start 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:00 compute-0 frosty_carson[74528]: 167 167
Dec 04 10:14:00 compute-0 systemd[1]: libpod-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.352996805 +0000 UTC m=+0.112019736 container attach 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.35360258 +0000 UTC m=+0.112625501 container died 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.26601891 +0000 UTC m=+0.025041851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:00 compute-0 podman[74511]: 2025-12-04 10:14:00.382954614 +0000 UTC m=+0.141977545 container remove 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:14:00 compute-0 systemd[1]: libpod-conmon-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.445432614 +0000 UTC m=+0.040642610 container create f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:00 compute-0 systemd[1]: Started libpod-conmon-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope.
Dec 04 10:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.502971803 +0000 UTC m=+0.098181819 container init f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.50734018 +0000 UTC m=+0.102550176 container start f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.510385473 +0000 UTC m=+0.105595519 container attach f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:00 compute-0 dreamy_leavitt[74560]: AQDoXjFpudVHHxAAseK2Wow9iO9o3+Ir2a2qrw==
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.429519086 +0000 UTC m=+0.024729092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:00 compute-0 systemd[1]: libpod-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.528072224 +0000 UTC m=+0.123282230 container died f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:14:00 compute-0 podman[74544]: 2025-12-04 10:14:00.56817168 +0000 UTC m=+0.163381676 container remove f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:00 compute-0 systemd[1]: libpod-conmon-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.641730419 +0000 UTC m=+0.056138667 container create 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:00 compute-0 systemd[1]: Started libpod-conmon-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope.
Dec 04 10:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.697454764 +0000 UTC m=+0.111863082 container init 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.610793746 +0000 UTC m=+0.025202084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.703600383 +0000 UTC m=+0.118008661 container start 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.708301938 +0000 UTC m=+0.122710226 container attach 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:00 compute-0 priceless_easley[74595]: AQDoXjFpzJYdLBAA9cSiudMhfVlhS9w0KRsFdw==
Dec 04 10:14:00 compute-0 systemd[1]: libpod-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.746545838 +0000 UTC m=+0.160954156 container died 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:14:00 compute-0 podman[74579]: 2025-12-04 10:14:00.789817641 +0000 UTC m=+0.204225929 container remove 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:14:00 compute-0 systemd[1]: libpod-conmon-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.871367625 +0000 UTC m=+0.055954332 container create 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:00 compute-0 systemd[1]: Started libpod-conmon-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope.
Dec 04 10:14:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.845584028 +0000 UTC m=+0.030170795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.949808993 +0000 UTC m=+0.134395740 container init 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.958309149 +0000 UTC m=+0.142895866 container start 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.962807959 +0000 UTC m=+0.147394726 container attach 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:00 compute-0 jolly_burnell[74632]: AQDoXjFp1z2GOhAA+dIeJ0tGhdDx9kUod6sTpQ==
Dec 04 10:14:00 compute-0 systemd[1]: libpod-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope: Deactivated successfully.
Dec 04 10:14:00 compute-0 podman[74615]: 2025-12-04 10:14:00.987981991 +0000 UTC m=+0.172568698 container died 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-205bef148c779b469b9afd8665e028752dde0f9e4d61ca282a7f3efe0406dec0-merged.mount: Deactivated successfully.
Dec 04 10:14:01 compute-0 podman[74615]: 2025-12-04 10:14:01.037142907 +0000 UTC m=+0.221729614 container remove 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:01 compute-0 systemd[1]: libpod-conmon-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope: Deactivated successfully.
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.135521871 +0000 UTC m=+0.064635613 container create 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 04 10:14:01 compute-0 systemd[1]: Started libpod-conmon-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope.
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.108338829 +0000 UTC m=+0.037452611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbbfb0286447243a590856cd11b35f9795603c79d38db212ade8df19b4f486/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.238638748 +0000 UTC m=+0.167752550 container init 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.247459864 +0000 UTC m=+0.176573616 container start 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.251320997 +0000 UTC m=+0.180434789 container attach 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:14:01 compute-0 confident_fermi[74666]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 04 10:14:01 compute-0 confident_fermi[74666]: setting min_mon_release = tentacle
Dec 04 10:14:01 compute-0 confident_fermi[74666]: /usr/bin/monmaptool: set fsid to f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:01 compute-0 confident_fermi[74666]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 04 10:14:01 compute-0 systemd[1]: libpod-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope: Deactivated successfully.
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.301042727 +0000 UTC m=+0.230156469 container died 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:14:01 compute-0 podman[74650]: 2025-12-04 10:14:01.349324601 +0000 UTC m=+0.278438323 container remove 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:01 compute-0 systemd[1]: libpod-conmon-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope: Deactivated successfully.
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.428958238 +0000 UTC m=+0.052084558 container create 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:01 compute-0 systemd[1]: Started libpod-conmon-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope.
Dec 04 10:14:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.404504634 +0000 UTC m=+0.027630994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.503148133 +0000 UTC m=+0.126274473 container init 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.507961671 +0000 UTC m=+0.131088001 container start 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.512494411 +0000 UTC m=+0.135620771 container attach 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:01 compute-0 systemd[1]: libpod-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope: Deactivated successfully.
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.609234294 +0000 UTC m=+0.232360624 container died 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:01 compute-0 podman[74686]: 2025-12-04 10:14:01.645196489 +0000 UTC m=+0.268322829 container remove 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:14:01 compute-0 systemd[1]: libpod-conmon-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope: Deactivated successfully.
Dec 04 10:14:01 compute-0 systemd[1]: Reloading.
Dec 04 10:14:01 compute-0 systemd-rc-local-generator[74766]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:01 compute-0 systemd-sysv-generator[74770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:01 compute-0 systemd[1]: Reloading.
Dec 04 10:14:02 compute-0 systemd-sysv-generator[74808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:02 compute-0 systemd-rc-local-generator[74803]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:02 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 04 10:14:02 compute-0 systemd[1]: Reloading.
Dec 04 10:14:02 compute-0 systemd-rc-local-generator[74842]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:02 compute-0 systemd-sysv-generator[74845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:02 compute-0 systemd[1]: Reached target Ceph cluster f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:02 compute-0 sshd-session[74477]: Invalid user teste from 101.47.163.20 port 47342
Dec 04 10:14:02 compute-0 systemd[1]: Reloading.
Dec 04 10:14:02 compute-0 systemd-rc-local-generator[74886]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:02 compute-0 systemd-sysv-generator[74890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:02 compute-0 systemd[1]: Reloading.
Dec 04 10:14:02 compute-0 systemd-sysv-generator[74927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:02 compute-0 systemd-rc-local-generator[74924]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:03 compute-0 systemd[1]: Created slice Slice /system/ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:03 compute-0 systemd[1]: Reached target System Time Set.
Dec 04 10:14:03 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 04 10:14:03 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:03 compute-0 podman[74983]: 2025-12-04 10:14:03.357208104 +0000 UTC m=+0.057066609 container create d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 podman[74983]: 2025-12-04 10:14:03.419500139 +0000 UTC m=+0.119358654 container init d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:03 compute-0 podman[74983]: 2025-12-04 10:14:03.332245147 +0000 UTC m=+0.032103732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:03 compute-0 podman[74983]: 2025-12-04 10:14:03.428682163 +0000 UTC m=+0.128540658 container start d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:14:03 compute-0 bash[74983]: d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993
Dec 04 10:14:03 compute-0 systemd[1]: Started Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:03 compute-0 ceph-mon[75003]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: pidfile_write: ignore empty --pid-file
Dec 04 10:14:03 compute-0 ceph-mon[75003]: load: jerasure load: lrc 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Git sha 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: DB SUMMARY
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: DB Session ID:  7WT4DFD6J7L4496MS03O
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                                     Options.env: 0x55a31fe52440
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                                Options.info_log: 0x55a320db93e0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                                 Options.wal_dir: 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                    Options.write_buffer_manager: 0x55a320d38140
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                               Options.row_cache: None
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                              Options.wal_filter: None
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.wal_compression: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.max_background_jobs: 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.max_total_wal_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:       Options.compaction_readahead_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Compression algorithms supported:
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kZSTD supported: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:           Options.merge_operator: 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:        Options.compaction_filter: None
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a320d44600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a320d298d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:        Options.write_buffer_size: 33554432
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:  Options.max_write_buffer_number: 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.compression: NoCompression
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.num_levels: 7
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bea4932-39ce-4c6c-8b9b-253595ae5108
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243486470, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243489570, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "7WT4DFD6J7L4496MS03O", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243489694, "job": 1, "event": "recovery_finished"}
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a320d56e00
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: DB pointer 0x55a320ea2000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:14:03 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a320d298d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:14:03 compute-0 ceph-mon[75003]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@-1(???) e0 preinit fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 04 10:14:03 compute-0 ceph-mon[75003]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.523212072 +0000 UTC m=+0.051948694 container create 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 04 10:14:03 compute-0 ceph-mon[75003]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : last_changed 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : created 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-04T10:14:01.553789Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).mds e1 new map
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-04T10:14:03:532003+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : fsmap 
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mkfs f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:03 compute-0 systemd[1]: Started libpod-conmon-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope.
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.503226646 +0000 UTC m=+0.031963288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.641580462 +0000 UTC m=+0.170317164 container init 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.652416445 +0000 UTC m=+0.181153097 container start 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.656129856 +0000 UTC m=+0.184866498 container attach 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 04 10:14:03 compute-0 ceph-mon[75003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692268427' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:14:03 compute-0 modest_payne[75058]:   cluster:
Dec 04 10:14:03 compute-0 modest_payne[75058]:     id:     f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:03 compute-0 modest_payne[75058]:     health: HEALTH_OK
Dec 04 10:14:03 compute-0 modest_payne[75058]:  
Dec 04 10:14:03 compute-0 modest_payne[75058]:   services:
Dec 04 10:14:03 compute-0 modest_payne[75058]:     mon: 1 daemons, quorum compute-0 (age 0.311425s) [leader: compute-0]
Dec 04 10:14:03 compute-0 modest_payne[75058]:     mgr: no daemons active
Dec 04 10:14:03 compute-0 modest_payne[75058]:     osd: 0 osds: 0 up, 0 in
Dec 04 10:14:03 compute-0 modest_payne[75058]:  
Dec 04 10:14:03 compute-0 modest_payne[75058]:   data:
Dec 04 10:14:03 compute-0 modest_payne[75058]:     pools:   0 pools, 0 pgs
Dec 04 10:14:03 compute-0 modest_payne[75058]:     objects: 0 objects, 0 B
Dec 04 10:14:03 compute-0 modest_payne[75058]:     usage:   0 B used, 0 B / 0 B avail
Dec 04 10:14:03 compute-0 modest_payne[75058]:     pgs:     
Dec 04 10:14:03 compute-0 modest_payne[75058]:  
Dec 04 10:14:03 compute-0 systemd[1]: libpod-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope: Deactivated successfully.
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.858347154 +0000 UTC m=+0.387083816 container died 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:14:03 compute-0 podman[75004]: 2025-12-04 10:14:03.896226356 +0000 UTC m=+0.424962968 container remove 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:03 compute-0 systemd[1]: libpod-conmon-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope: Deactivated successfully.
Dec 04 10:14:03 compute-0 podman[75095]: 2025-12-04 10:14:03.966091655 +0000 UTC m=+0.046597724 container create 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:04 compute-0 systemd[1]: Started libpod-conmon-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope.
Dec 04 10:14:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:03.941641891 +0000 UTC m=+0.022147950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:04.05015015 +0000 UTC m=+0.130656229 container init 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:04.062208644 +0000 UTC m=+0.142714683 container start 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:04.066434366 +0000 UTC m=+0.146940435 container attach 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:14:04 compute-0 ceph-mon[75003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 04 10:14:04 compute-0 silly_lewin[75112]: 
Dec 04 10:14:04 compute-0 silly_lewin[75112]: [global]
Dec 04 10:14:04 compute-0 silly_lewin[75112]:         fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:04 compute-0 silly_lewin[75112]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 04 10:14:04 compute-0 silly_lewin[75112]:         osd_crush_chooseleaf_type = 0
Dec 04 10:14:04 compute-0 systemd[1]: libpod-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope: Deactivated successfully.
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:04.274762744 +0000 UTC m=+0.355268793 container died 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd-merged.mount: Deactivated successfully.
Dec 04 10:14:04 compute-0 podman[75095]: 2025-12-04 10:14:04.31075138 +0000 UTC m=+0.391257419 container remove 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:14:04 compute-0 systemd[1]: libpod-conmon-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope: Deactivated successfully.
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.366954837 +0000 UTC m=+0.038284073 container create 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:14:04 compute-0 systemd[1]: Started libpod-conmon-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope.
Dec 04 10:14:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.350049366 +0000 UTC m=+0.021378662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.454407974 +0000 UTC m=+0.125737250 container init 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.461873935 +0000 UTC m=+0.133203191 container start 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.465490264 +0000 UTC m=+0.136819540 container attach 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: monmap epoch 1
Dec 04 10:14:04 compute-0 ceph-mon[75003]: fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:04 compute-0 ceph-mon[75003]: last_changed 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:04 compute-0 ceph-mon[75003]: created 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:04 compute-0 ceph-mon[75003]: min_mon_release 20 (tentacle)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: election_strategy: 1
Dec 04 10:14:04 compute-0 ceph-mon[75003]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 04 10:14:04 compute-0 ceph-mon[75003]: fsmap 
Dec 04 10:14:04 compute-0 ceph-mon[75003]: osdmap e1: 0 total, 0 up, 0 in
Dec 04 10:14:04 compute-0 ceph-mon[75003]: mgrmap e1: no daemons active
Dec 04 10:14:04 compute-0 ceph-mon[75003]: from='client.? 192.168.122.100:0/1692268427' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:14:04 compute-0 ceph-mon[75003]: from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:14:04 compute-0 ceph-mon[75003]: from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 04 10:14:04 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:04 compute-0 ceph-mon[75003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393911737' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:04 compute-0 systemd[1]: libpod-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope: Deactivated successfully.
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.670308706 +0000 UTC m=+0.341637972 container died 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5-merged.mount: Deactivated successfully.
Dec 04 10:14:04 compute-0 podman[75149]: 2025-12-04 10:14:04.942247562 +0000 UTC m=+0.613576838 container remove 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:14:04 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:05 compute-0 systemd[1]: libpod-conmon-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope: Deactivated successfully.
Dec 04 10:14:05 compute-0 ceph-mon[75003]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 04 10:14:05 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 04 10:14:05 compute-0 ceph-mon[75003]: mon.compute-0@0(leader) e1 shutdown
Dec 04 10:14:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[74999]: 2025-12-04T10:14:05.226+0000 7f8431168640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 04 10:14:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[74999]: 2025-12-04T10:14:05.226+0000 7f8431168640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 04 10:14:05 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 04 10:14:05 compute-0 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 04 10:14:05 compute-0 podman[75234]: 2025-12-04 10:14:05.310168502 +0000 UTC m=+0.137919257 container died d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593-merged.mount: Deactivated successfully.
Dec 04 10:14:05 compute-0 podman[75234]: 2025-12-04 10:14:05.476483007 +0000 UTC m=+0.304233772 container remove d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:05 compute-0 bash[75234]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0
Dec 04 10:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 04 10:14:05 compute-0 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0.service: Deactivated successfully.
Dec 04 10:14:05 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:05 compute-0 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0.service: Consumed 1.113s CPU time.
Dec 04 10:14:05 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:05 compute-0 podman[75338]: 2025-12-04 10:14:05.863185673 +0000 UTC m=+0.026869684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:06 compute-0 podman[75338]: 2025-12-04 10:14:06.178159007 +0000 UTC m=+0.341842968 container create 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 podman[75338]: 2025-12-04 10:14:06.357745946 +0000 UTC m=+0.521429957 container init 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:14:06 compute-0 podman[75338]: 2025-12-04 10:14:06.368766473 +0000 UTC m=+0.532450434 container start 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:14:06 compute-0 bash[75338]: 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88
Dec 04 10:14:06 compute-0 systemd[1]: Started Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:06 compute-0 ceph-mon[75358]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: pidfile_write: ignore empty --pid-file
Dec 04 10:14:06 compute-0 ceph-mon[75358]: load: jerasure load: lrc 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Git sha 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: DB SUMMARY
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: DB Session ID:  Y30CWPND84TKXOFWI6NG
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                                     Options.env: 0x56349edf6440
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                                Options.info_log: 0x56349f85fe80
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                                 Options.wal_dir: 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                    Options.write_buffer_manager: 0x56349f8aa140
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                               Options.row_cache: None
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                              Options.wal_filter: None
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.wal_compression: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.max_background_jobs: 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.max_total_wal_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:       Options.compaction_readahead_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Compression algorithms supported:
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kZSTD supported: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:           Options.merge_operator: 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:        Options.compaction_filter: None
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56349f8b6a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56349f89b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:        Options.write_buffer_size: 33554432
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:  Options.max_write_buffer_number: 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.compression: NoCompression
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.num_levels: 7
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bea4932-39ce-4c6c-8b9b-253595ae5108
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246409136, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246413387, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246413483, "job": 1, "event": "recovery_finished"}
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56349f8c8e00
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: DB pointer 0x56349fa12000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:14:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:14:06 compute-0 ceph-mon[75358]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???) e1 preinit fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).mds e1 new map
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-04T10:14:03:532003+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 04 10:14:06 compute-0 ceph-mon[75358]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : last_changed 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : created 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 04 10:14:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.486761843 +0000 UTC m=+0.059559859 container create e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: monmap epoch 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:06 compute-0 ceph-mon[75358]: last_changed 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: created 2025-12-04T10:14:01.294217+0000
Dec 04 10:14:06 compute-0 ceph-mon[75358]: min_mon_release 20 (tentacle)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: election_strategy: 1
Dec 04 10:14:06 compute-0 ceph-mon[75358]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 04 10:14:06 compute-0 ceph-mon[75358]: fsmap 
Dec 04 10:14:06 compute-0 ceph-mon[75358]: osdmap e1: 0 total, 0 up, 0 in
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mgrmap e1: no daemons active
Dec 04 10:14:06 compute-0 systemd[1]: Started libpod-conmon-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope.
Dec 04 10:14:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.472509646 +0000 UTC m=+0.045307682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.567557098 +0000 UTC m=+0.140355174 container init e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.579698203 +0000 UTC m=+0.152496219 container start e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.586060299 +0000 UTC m=+0.158858345 container attach e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 04 10:14:06 compute-0 systemd[1]: libpod-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope: Deactivated successfully.
Dec 04 10:14:06 compute-0 conmon[75413]: conmon e9d869578e7c95c57feb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope/container/memory.events
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.788444591 +0000 UTC m=+0.361242627 container died e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b-merged.mount: Deactivated successfully.
Dec 04 10:14:06 compute-0 podman[75359]: 2025-12-04 10:14:06.846163655 +0000 UTC m=+0.418961691 container remove e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:14:06 compute-0 systemd[1]: libpod-conmon-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope: Deactivated successfully.
Dec 04 10:14:06 compute-0 podman[75451]: 2025-12-04 10:14:06.928539149 +0000 UTC m=+0.059797726 container create e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:06 compute-0 systemd[1]: Started libpod-conmon-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope.
Dec 04 10:14:06 compute-0 podman[75451]: 2025-12-04 10:14:06.895197398 +0000 UTC m=+0.026456055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:07 compute-0 podman[75451]: 2025-12-04 10:14:07.037729955 +0000 UTC m=+0.168988582 container init e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:07 compute-0 podman[75451]: 2025-12-04 10:14:07.049300316 +0000 UTC m=+0.180558863 container start e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:14:07 compute-0 podman[75451]: 2025-12-04 10:14:07.05272984 +0000 UTC m=+0.183988387 container attach e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 04 10:14:07 compute-0 systemd[1]: libpod-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope: Deactivated successfully.
Dec 04 10:14:07 compute-0 podman[75451]: 2025-12-04 10:14:07.320436762 +0000 UTC m=+0.451695369 container died e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b-merged.mount: Deactivated successfully.
Dec 04 10:14:07 compute-0 podman[75451]: 2025-12-04 10:14:07.367080706 +0000 UTC m=+0.498339253 container remove e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:14:07 compute-0 systemd[1]: libpod-conmon-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope: Deactivated successfully.
Dec 04 10:14:07 compute-0 systemd[1]: Reloading.
Dec 04 10:14:07 compute-0 systemd-rc-local-generator[75536]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:07 compute-0 systemd-sysv-generator[75540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:08 compute-0 systemd[1]: Reloading.
Dec 04 10:14:08 compute-0 systemd-sysv-generator[75573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:08 compute-0 systemd-rc-local-generator[75570]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:08 compute-0 systemd[1]: Starting Ceph mgr.compute-0.iwufnj for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:08 compute-0 podman[75631]: 2025-12-04 10:14:08.707275798 +0000 UTC m=+0.074568915 container create aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/lib/ceph/mgr/ceph-compute-0.iwufnj supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:08 compute-0 podman[75631]: 2025-12-04 10:14:08.676679734 +0000 UTC m=+0.043972901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:08 compute-0 podman[75631]: 2025-12-04 10:14:08.793598808 +0000 UTC m=+0.160891945 container init aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:08 compute-0 podman[75631]: 2025-12-04 10:14:08.811333219 +0000 UTC m=+0.178626296 container start aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:14:08 compute-0 bash[75631]: aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca
Dec 04 10:14:08 compute-0 systemd[1]: Started Ceph mgr.compute-0.iwufnj for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:08 compute-0 ceph-mgr[75651]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:14:08 compute-0 ceph-mgr[75651]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 04 10:14:08 compute-0 ceph-mgr[75651]: pidfile_write: ignore empty --pid-file
Dec 04 10:14:08 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'alerts'
Dec 04 10:14:08 compute-0 podman[75652]: 2025-12-04 10:14:08.949144131 +0000 UTC m=+0.073449947 container create 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:08 compute-0 systemd[1]: Started libpod-conmon-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope.
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:08.919561462 +0000 UTC m=+0.043867348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:09 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'balancer'
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:09.055068628 +0000 UTC m=+0.179374454 container init 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:09.068743091 +0000 UTC m=+0.193048877 container start 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:09.072695327 +0000 UTC m=+0.197001163 container attach 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:09 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'cephadm'
Dec 04 10:14:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 04 10:14:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641450880' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:09 compute-0 laughing_banach[75689]: 
Dec 04 10:14:09 compute-0 laughing_banach[75689]: {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "health": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "status": "HEALTH_OK",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "checks": {},
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "mutes": []
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "election_epoch": 5,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "quorum": [
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         0
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     ],
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "quorum_names": [
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "compute-0"
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     ],
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "quorum_age": 2,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "monmap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "epoch": 1,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "min_mon_release_name": "tentacle",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_mons": 1
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "osdmap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "epoch": 1,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_osds": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_up_osds": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "osd_up_since": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_in_osds": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "osd_in_since": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_remapped_pgs": 0
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "pgmap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "pgs_by_state": [],
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_pgs": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_pools": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_objects": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "data_bytes": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "bytes_used": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "bytes_avail": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "bytes_total": 0
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "fsmap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "epoch": 1,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "btime": "2025-12-04T10:14:03:532003+0000",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "by_rank": [],
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "up:standby": 0
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "mgrmap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "available": false,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "num_standbys": 0,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "modules": [
Dec 04 10:14:09 compute-0 laughing_banach[75689]:             "iostat",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:             "nfs"
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         ],
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "services": {}
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "servicemap": {
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "epoch": 1,
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "modified": "2025-12-04T10:14:03.534445+0000",
Dec 04 10:14:09 compute-0 laughing_banach[75689]:         "services": {}
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     },
Dec 04 10:14:09 compute-0 laughing_banach[75689]:     "progress_events": {}
Dec 04 10:14:09 compute-0 laughing_banach[75689]: }
Dec 04 10:14:09 compute-0 systemd[1]: libpod-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope: Deactivated successfully.
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:09.303057831 +0000 UTC m=+0.427363637 container died 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675-merged.mount: Deactivated successfully.
Dec 04 10:14:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/641450880' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:09 compute-0 podman[75652]: 2025-12-04 10:14:09.347219905 +0000 UTC m=+0.471525711 container remove 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:09 compute-0 systemd[1]: libpod-conmon-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope: Deactivated successfully.
Dec 04 10:14:09 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'crash'
Dec 04 10:14:10 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'dashboard'
Dec 04 10:14:10 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'devicehealth'
Dec 04 10:14:10 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'diskprediction_local'
Dec 04 10:14:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 04 10:14:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 04 10:14:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]:   from numpy import show_config as show_numpy_config
Dec 04 10:14:10 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'influx'
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'insights'
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'iostat'
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'k8sevents'
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.42631518 +0000 UTC m=+0.053541244 container create ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:11 compute-0 systemd[1]: Started libpod-conmon-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope.
Dec 04 10:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.400919252 +0000 UTC m=+0.028145366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.512962598 +0000 UTC m=+0.140188672 container init ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.522911421 +0000 UTC m=+0.150137495 container start ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.533022726 +0000 UTC m=+0.160248790 container attach ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'localpool'
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 04 10:14:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 04 10:14:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927057361' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]: 
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]: {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "health": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "status": "HEALTH_OK",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "checks": {},
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "mutes": []
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "election_epoch": 5,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "quorum": [
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         0
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     ],
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "quorum_names": [
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "compute-0"
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     ],
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "quorum_age": 5,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "monmap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "epoch": 1,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "min_mon_release_name": "tentacle",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_mons": 1
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "osdmap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "epoch": 1,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_osds": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_up_osds": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "osd_up_since": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_in_osds": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "osd_in_since": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_remapped_pgs": 0
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "pgmap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "pgs_by_state": [],
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_pgs": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_pools": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_objects": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "data_bytes": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "bytes_used": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "bytes_avail": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "bytes_total": 0
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "fsmap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "epoch": 1,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "btime": "2025-12-04T10:14:03:532003+0000",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "by_rank": [],
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "up:standby": 0
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "mgrmap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "available": false,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "num_standbys": 0,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "modules": [
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:             "iostat",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:             "nfs"
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         ],
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "services": {}
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "servicemap": {
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "epoch": 1,
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "modified": "2025-12-04T10:14:03.534445+0000",
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:         "services": {}
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     },
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]:     "progress_events": {}
Dec 04 10:14:11 compute-0 suspicious_maxwell[75755]: }
Dec 04 10:14:11 compute-0 systemd[1]: libpod-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope: Deactivated successfully.
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.730417352 +0000 UTC m=+0.357643416 container died ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9-merged.mount: Deactivated successfully.
Dec 04 10:14:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/927057361' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:11 compute-0 podman[75739]: 2025-12-04 10:14:11.768016249 +0000 UTC m=+0.395242313 container remove ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:14:11 compute-0 systemd[1]: libpod-conmon-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope: Deactivated successfully.
Dec 04 10:14:11 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'mirroring'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'nfs'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'orchestrator'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'osd_perf_query'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'osd_support'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'progress'
Dec 04 10:14:12 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'prometheus'
Dec 04 10:14:12 compute-0 sshd-session[75794]: Invalid user redmine from 74.249.218.27 port 36504
Dec 04 10:14:12 compute-0 sshd-session[75794]: Received disconnect from 74.249.218.27 port 36504:11: Bye Bye [preauth]
Dec 04 10:14:12 compute-0 sshd-session[75794]: Disconnected from invalid user redmine 74.249.218.27 port 36504 [preauth]
Dec 04 10:14:13 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rbd_support'
Dec 04 10:14:13 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rgw'
Dec 04 10:14:13 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rook'
Dec 04 10:14:13 compute-0 podman[75796]: 2025-12-04 10:14:13.815399267 +0000 UTC m=+0.024399412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:13 compute-0 podman[75796]: 2025-12-04 10:14:13.981493499 +0000 UTC m=+0.190493544 container create 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:14:14 compute-0 systemd[1]: Started libpod-conmon-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope.
Dec 04 10:14:14 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:14 compute-0 podman[75796]: 2025-12-04 10:14:14.071346927 +0000 UTC m=+0.280347052 container init 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 04 10:14:14 compute-0 podman[75796]: 2025-12-04 10:14:14.077877914 +0000 UTC m=+0.286877959 container start 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:14 compute-0 podman[75796]: 2025-12-04 10:14:14.081830505 +0000 UTC m=+0.290830570 container attach 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'selftest'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'smb'
Dec 04 10:14:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 04 10:14:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421471824' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]: 
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]: {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "health": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "status": "HEALTH_OK",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "checks": {},
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "mutes": []
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "election_epoch": 5,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "quorum": [
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         0
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     ],
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "quorum_names": [
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "compute-0"
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     ],
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "quorum_age": 7,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "monmap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "epoch": 1,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "min_mon_release_name": "tentacle",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_mons": 1
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "osdmap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "epoch": 1,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_osds": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_up_osds": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "osd_up_since": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_in_osds": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "osd_in_since": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_remapped_pgs": 0
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "pgmap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "pgs_by_state": [],
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_pgs": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_pools": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_objects": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "data_bytes": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "bytes_used": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "bytes_avail": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "bytes_total": 0
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "fsmap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "epoch": 1,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "btime": "2025-12-04T10:14:03:532003+0000",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "by_rank": [],
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "up:standby": 0
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "mgrmap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "available": false,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "num_standbys": 0,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "modules": [
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:             "iostat",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:             "nfs"
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         ],
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "services": {}
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "servicemap": {
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "epoch": 1,
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "modified": "2025-12-04T10:14:03.534445+0000",
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:         "services": {}
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     },
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]:     "progress_events": {}
Dec 04 10:14:14 compute-0 peaceful_diffie[75812]: }
Dec 04 10:14:14 compute-0 systemd[1]: libpod-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope: Deactivated successfully.
Dec 04 10:14:14 compute-0 podman[75796]: 2025-12-04 10:14:14.301189337 +0000 UTC m=+0.510189382 container died 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4-merged.mount: Deactivated successfully.
Dec 04 10:14:14 compute-0 podman[75796]: 2025-12-04 10:14:14.334530167 +0000 UTC m=+0.543530202 container remove 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:14:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2421471824' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:14 compute-0 systemd[1]: libpod-conmon-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope: Deactivated successfully.
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'snap_schedule'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'stats'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'status'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'telegraf'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'telemetry'
Dec 04 10:14:14 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'test_orchestrator'
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'volumes'
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: ms_deliver_dispatch: unhandled message 0x5563bf9e7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.iwufnj
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr handle_mgr_map Activating!
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.iwufnj(active, starting, since 0.011833s)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr handle_mgr_map I am now activating
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e1 all = 1
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: balancer
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer INFO root] Starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: crash
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:14:15
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Manager daemon compute-0.iwufnj is now available
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: devicehealth
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: iostat
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: nfs
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: orchestrator
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: pg_autoscaler
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: progress
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [progress INFO root] Loading...
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [progress INFO root] No stored events to load
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [progress INFO root] Loaded [] historic events
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [progress INFO root] Loaded OSDMap, ready.
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] recovery thread starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] starting setup
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: rbd_support
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: status
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} v 0)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: telemetry
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] PerfHandler: starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TaskHandler: starting
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} v 0)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: [rbd_support INFO root] setup complete
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 04 10:14:15 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: volumes
Dec 04 10:14:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:15 compute-0 ceph-mon[75358]: Activating manager daemon compute-0.iwufnj
Dec 04 10:14:15 compute-0 ceph-mon[75358]: mgrmap e2: compute-0.iwufnj(active, starting, since 0.011833s)
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: Manager daemon compute-0.iwufnj is now available
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:15 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:16 compute-0 podman[75928]: 2025-12-04 10:14:16.407885082 +0000 UTC m=+0.050173034 container create 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:14:16 compute-0 systemd[1]: Started libpod-conmon-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope.
Dec 04 10:14:16 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:16 compute-0 podman[75928]: 2025-12-04 10:14:16.390356727 +0000 UTC m=+0.032644719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:16 compute-0 podman[75928]: 2025-12-04 10:14:16.506040591 +0000 UTC m=+0.148328643 container init 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:16 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.iwufnj(active, since 1.03502s)
Dec 04 10:14:16 compute-0 podman[75928]: 2025-12-04 10:14:16.517899244 +0000 UTC m=+0.160187196 container start 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:14:16 compute-0 podman[75928]: 2025-12-04 10:14:16.522339684 +0000 UTC m=+0.164627666 container attach 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:16 compute-0 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:16 compute-0 ceph-mon[75358]: mgrmap e3: compute-0.iwufnj(active, since 1.03502s)
Dec 04 10:14:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 04 10:14:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286944483' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:17 compute-0 quirky_feynman[75945]: 
Dec 04 10:14:17 compute-0 quirky_feynman[75945]: {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "health": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "status": "HEALTH_OK",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "checks": {},
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "mutes": []
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "election_epoch": 5,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "quorum": [
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         0
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     ],
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "quorum_names": [
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "compute-0"
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     ],
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "quorum_age": 10,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "monmap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "epoch": 1,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "min_mon_release_name": "tentacle",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_mons": 1
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "osdmap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "epoch": 1,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_osds": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_up_osds": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "osd_up_since": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_in_osds": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "osd_in_since": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_remapped_pgs": 0
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "pgmap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "pgs_by_state": [],
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_pgs": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_pools": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_objects": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "data_bytes": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "bytes_used": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "bytes_avail": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "bytes_total": 0
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "fsmap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "epoch": 1,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "btime": "2025-12-04T10:14:03:532003+0000",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "by_rank": [],
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "up:standby": 0
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "mgrmap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "available": true,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "num_standbys": 0,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "modules": [
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:             "iostat",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:             "nfs"
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         ],
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "services": {}
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "servicemap": {
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "epoch": 1,
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "modified": "2025-12-04T10:14:03.534445+0000",
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:         "services": {}
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     },
Dec 04 10:14:17 compute-0 quirky_feynman[75945]:     "progress_events": {}
Dec 04 10:14:17 compute-0 quirky_feynman[75945]: }
Dec 04 10:14:17 compute-0 systemd[1]: libpod-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[75971]: 2025-12-04 10:14:17.074661703 +0000 UTC m=+0.022086159 container died 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30-merged.mount: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[75971]: 2025-12-04 10:14:17.119834486 +0000 UTC m=+0.067258932 container remove 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:14:17 compute-0 systemd[1]: libpod-conmon-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.219213786 +0000 UTC m=+0.060213505 container create 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:17 compute-0 systemd[1]: Started libpod-conmon-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope.
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.192655578 +0000 UTC m=+0.033655367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.315571572 +0000 UTC m=+0.156571291 container init 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.322450596 +0000 UTC m=+0.163450325 container start 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.327009418 +0000 UTC m=+0.168009147 container attach 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:14:17 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:17 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:17 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.iwufnj(active, since 2s)
Dec 04 10:14:17 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4286944483' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:14:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 04 10:14:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2335252299' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:14:17 compute-0 zen_hertz[76002]: 
Dec 04 10:14:17 compute-0 zen_hertz[76002]: [global]
Dec 04 10:14:17 compute-0 zen_hertz[76002]:         fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:17 compute-0 zen_hertz[76002]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 04 10:14:17 compute-0 zen_hertz[76002]:         osd_crush_chooseleaf_type = 0
Dec 04 10:14:17 compute-0 systemd[1]: libpod-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.836942712 +0000 UTC m=+0.677942451 container died 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a-merged.mount: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[75986]: 2025-12-04 10:14:17.874092562 +0000 UTC m=+0.715092281 container remove 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:17 compute-0 systemd[1]: libpod-conmon-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope: Deactivated successfully.
Dec 04 10:14:17 compute-0 podman[76040]: 2025-12-04 10:14:17.94453378 +0000 UTC m=+0.052008547 container create 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:14:17 compute-0 systemd[1]: Started libpod-conmon-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope.
Dec 04 10:14:18 compute-0 podman[76040]: 2025-12-04 10:14:17.916297272 +0000 UTC m=+0.023772089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:18 compute-0 podman[76040]: 2025-12-04 10:14:18.031346714 +0000 UTC m=+0.138821461 container init 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:14:18 compute-0 podman[76040]: 2025-12-04 10:14:18.036131 +0000 UTC m=+0.143605727 container start 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:18 compute-0 podman[76040]: 2025-12-04 10:14:18.039225586 +0000 UTC m=+0.146700313 container attach 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:14:18 compute-0 ceph-mon[75358]: mgrmap e4: compute-0.iwufnj(active, since 2s)
Dec 04 10:14:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2335252299' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:14:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 04 10:14:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:19 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 04 10:14:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  1: '-n'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  2: 'mgr.compute-0.iwufnj'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  3: '-f'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  4: '--setuser'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  5: 'ceph'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  6: '--setgroup'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  7: 'ceph'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  8: '--default-log-to-file=false'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  9: '--default-log-to-journald=true'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr respawn  exe_path /proc/self/exe
Dec 04 10:14:19 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.iwufnj(active, since 4s)
Dec 04 10:14:19 compute-0 systemd[1]: libpod-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope: Deactivated successfully.
Dec 04 10:14:19 compute-0 podman[76040]: 2025-12-04 10:14:19.62118107 +0000 UTC m=+1.728655837 container died 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b-merged.mount: Deactivated successfully.
Dec 04 10:14:19 compute-0 podman[76040]: 2025-12-04 10:14:19.667546616 +0000 UTC m=+1.775021363 container remove 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:19 compute-0 systemd[1]: libpod-conmon-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope: Deactivated successfully.
Dec 04 10:14:19 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: ignoring --setuser ceph since I am not root
Dec 04 10:14:19 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: ignoring --setgroup ceph since I am not root
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: pidfile_write: ignore empty --pid-file
Dec 04 10:14:19 compute-0 podman[76094]: 2025-12-04 10:14:19.731481907 +0000 UTC m=+0.045020252 container create 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'alerts'
Dec 04 10:14:19 compute-0 systemd[1]: Started libpod-conmon-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope.
Dec 04 10:14:19 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:19 compute-0 podman[76094]: 2025-12-04 10:14:19.786749893 +0000 UTC m=+0.100288258 container init 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:14:19 compute-0 podman[76094]: 2025-12-04 10:14:19.799163047 +0000 UTC m=+0.112701392 container start 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:19 compute-0 podman[76094]: 2025-12-04 10:14:19.802857082 +0000 UTC m=+0.116395507 container attach 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:19 compute-0 podman[76094]: 2025-12-04 10:14:19.71332156 +0000 UTC m=+0.026859935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'balancer'
Dec 04 10:14:19 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'cephadm'
Dec 04 10:14:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 04 10:14:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449980278' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 10:14:20 compute-0 magical_lovelace[76130]: {
Dec 04 10:14:20 compute-0 magical_lovelace[76130]:     "epoch": 5,
Dec 04 10:14:20 compute-0 magical_lovelace[76130]:     "available": true,
Dec 04 10:14:20 compute-0 magical_lovelace[76130]:     "active_name": "compute-0.iwufnj",
Dec 04 10:14:20 compute-0 magical_lovelace[76130]:     "num_standby": 0
Dec 04 10:14:20 compute-0 magical_lovelace[76130]: }
Dec 04 10:14:20 compute-0 systemd[1]: libpod-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope: Deactivated successfully.
Dec 04 10:14:20 compute-0 podman[76094]: 2025-12-04 10:14:20.271166218 +0000 UTC m=+0.584704613 container died 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18-merged.mount: Deactivated successfully.
Dec 04 10:14:20 compute-0 podman[76094]: 2025-12-04 10:14:20.312413061 +0000 UTC m=+0.625951406 container remove 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:14:20 compute-0 systemd[1]: libpod-conmon-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope: Deactivated successfully.
Dec 04 10:14:20 compute-0 podman[76174]: 2025-12-04 10:14:20.375864163 +0000 UTC m=+0.041831373 container create ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:20 compute-0 systemd[1]: Started libpod-conmon-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope.
Dec 04 10:14:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:20 compute-0 podman[76174]: 2025-12-04 10:14:20.355299923 +0000 UTC m=+0.021267183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:20 compute-0 podman[76174]: 2025-12-04 10:14:20.460200293 +0000 UTC m=+0.126167533 container init ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:20 compute-0 podman[76174]: 2025-12-04 10:14:20.467982673 +0000 UTC m=+0.133949893 container start ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:14:20 compute-0 podman[76174]: 2025-12-04 10:14:20.471635789 +0000 UTC m=+0.137603039 container attach ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 04 10:14:20 compute-0 ceph-mon[75358]: mgrmap e5: compute-0.iwufnj(active, since 4s)
Dec 04 10:14:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/449980278' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 10:14:20 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'crash'
Dec 04 10:14:20 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'dashboard'
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'devicehealth'
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'diskprediction_local'
Dec 04 10:14:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 04 10:14:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 04 10:14:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]:   from numpy import show_config as show_numpy_config
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'influx'
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'insights'
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'iostat'
Dec 04 10:14:21 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'k8sevents'
Dec 04 10:14:22 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'localpool'
Dec 04 10:14:22 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'mds_autoscaler'
Dec 04 10:14:22 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'mirroring'
Dec 04 10:14:22 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'nfs'
Dec 04 10:14:22 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'orchestrator'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'osd_perf_query'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'osd_support'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'pg_autoscaler'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'progress'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'prometheus'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rbd_support'
Dec 04 10:14:23 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rgw'
Dec 04 10:14:24 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'rook'
Dec 04 10:14:24 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'selftest'
Dec 04 10:14:24 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'smb'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'snap_schedule'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'stats'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'status'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'telegraf'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'telemetry'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'test_orchestrator'
Dec 04 10:14:25 compute-0 ceph-mgr[75651]: mgr[py] Loading python module 'volumes'
Dec 04 10:14:25 compute-0 sshd-session[76219]: Invalid user admin123 from 217.154.62.22 port 35588
Dec 04 10:14:26 compute-0 sshd-session[76219]: Received disconnect from 217.154.62.22 port 35588:11: Bye Bye [preauth]
Dec 04 10:14:26 compute-0 sshd-session[76219]: Disconnected from invalid user admin123 217.154.62.22 port 35588 [preauth]
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Active manager daemon compute-0.iwufnj restarted
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.iwufnj
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: ms_deliver_dispatch: unhandled message 0x55fe4a77a000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: mgr handle_mgr_map Activating!
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: mgr handle_mgr_map I am now activating
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.iwufnj(active, starting, since 0.542923s)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} v 0)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e1 all = 1
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: balancer
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Starting
Dec 04 10:14:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Manager daemon compute-0.iwufnj is now available
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:14:26
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:14:26 compute-0 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec 04 10:14:26 compute-0 ceph-mon[75358]: Active manager daemon compute-0.iwufnj restarted
Dec 04 10:14:26 compute-0 ceph-mon[75358]: Activating manager daemon compute-0.iwufnj
Dec 04 10:14:26 compute-0 ceph-mon[75358]: osdmap e2: 0 total, 0 up, 0 in
Dec 04 10:14:26 compute-0 ceph-mon[75358]: mgrmap e6: compute-0.iwufnj(active, starting, since 0.542923s)
Dec 04 10:14:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec 04 10:14:26 compute-0 ceph-mon[75358]: Manager daemon compute-0.iwufnj is now available
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.iwufnj(active, since 1.71936s)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 04 10:14:27 compute-0 distracted_booth[76195]: {
Dec 04 10:14:27 compute-0 distracted_booth[76195]:     "mgrmap_epoch": 7,
Dec 04 10:14:27 compute-0 distracted_booth[76195]:     "initialized": true
Dec 04 10:14:27 compute-0 distracted_booth[76195]: }
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: cephadm
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: crash
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: devicehealth
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Starting
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: iostat
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: nfs
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: orchestrator
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: pg_autoscaler
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: progress
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [progress INFO root] Loading...
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [progress INFO root] No stored events to load
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [progress INFO root] Loaded [] historic events
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [progress INFO root] Loaded OSDMap, ready.
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:14:27 compute-0 systemd[1]: libpod-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope: Deactivated successfully.
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:27 compute-0 podman[76174]: 2025-12-04 10:14:27.87285493 +0000 UTC m=+7.538822230 container died ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] recovery thread starting
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] starting setup
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: rbd_support
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: status
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: telemetry
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] PerfHandler: starting
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TaskHandler: starting
Dec 04 10:14:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} v 0)
Dec 04 10:14:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] setup complete
Dec 04 10:14:27 compute-0 ceph-mgr[75651]: mgr load Constructed class from module: volumes
Dec 04 10:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04-merged.mount: Deactivated successfully.
Dec 04 10:14:27 compute-0 podman[76174]: 2025-12-04 10:14:27.929295596 +0000 UTC m=+7.595262856 container remove ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:27 compute-0 systemd[1]: libpod-conmon-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope: Deactivated successfully.
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.00447392 +0000 UTC m=+0.047688940 container create 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:14:28 compute-0 systemd[1]: Started libpod-conmon-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope.
Dec 04 10:14:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:27.98613116 +0000 UTC m=+0.029346210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.092301713 +0000 UTC m=+0.135516753 container init 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.097647909 +0000 UTC m=+0.140862929 container start 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.101157352 +0000 UTC m=+0.144372382 container attach 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 04 10:14:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:28 compute-0 ceph-mon[75358]: mgrmap e7: compute-0.iwufnj(active, since 1.71936s)
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:28 compute-0 ceph-mon[75358]: Found migration_current of "None". Setting to last migration.
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 04 10:14:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 04 10:14:28 compute-0 unruffled_poitras[76360]: module 'orchestrator' is already enabled (always-on)
Dec 04 10:14:28 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.iwufnj(active, since 2s)
Dec 04 10:14:28 compute-0 systemd[1]: libpod-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope: Deactivated successfully.
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.878435342 +0000 UTC m=+0.921650362 container died 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea-merged.mount: Deactivated successfully.
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec 04 10:14:28 compute-0 podman[76344]: 2025-12-04 10:14:28.924715976 +0000 UTC m=+0.967930996 container remove 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 04 10:14:28 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 04 10:14:28 compute-0 systemd[1]: libpod-conmon-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope: Deactivated successfully.
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.010726035 +0000 UTC m=+0.056102151 container create 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec 04 10:14:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:14:29 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:29 compute-0 systemd[1]: Started libpod-conmon-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope.
Dec 04 10:14:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:28.989205667 +0000 UTC m=+0.034581763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.097128361 +0000 UTC m=+0.142504457 container init 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.102597229 +0000 UTC m=+0.147973335 container start 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.107421787 +0000 UTC m=+0.152797893 container attach 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 04 10:14:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:14:29 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:29 compute-0 systemd[1]: libpod-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope: Deactivated successfully.
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.550251553 +0000 UTC m=+0.595627659 container died 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402-merged.mount: Deactivated successfully.
Dec 04 10:14:29 compute-0 podman[76420]: 2025-12-04 10:14:29.589759304 +0000 UTC m=+0.635135380 container remove 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:29 compute-0 systemd[1]: libpod-conmon-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope: Deactivated successfully.
Dec 04 10:14:29 compute-0 podman[76473]: 2025-12-04 10:14:29.642901202 +0000 UTC m=+0.037413526 container create 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:29 compute-0 systemd[1]: Started libpod-conmon-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope.
Dec 04 10:14:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:29 compute-0 podman[76473]: 2025-12-04 10:14:29.625803354 +0000 UTC m=+0.020315708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:29 compute-0 podman[76473]: 2025-12-04 10:14:29.730701873 +0000 UTC m=+0.125214217 container init 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:14:29 compute-0 podman[76473]: 2025-12-04 10:14:29.740322237 +0000 UTC m=+0.134834581 container start 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:29 compute-0 podman[76473]: 2025-12-04 10:14:29.744035233 +0000 UTC m=+0.138547577 container attach 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:29 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:29 compute-0 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec 04 10:14:29 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 04 10:14:29 compute-0 ceph-mon[75358]: mgrmap e8: compute-0.iwufnj(active, since 2s)
Dec 04 10:14:29 compute-0 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec 04 10:14:29 compute-0 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 04 10:14:29 compute-0 ceph-mon[75358]: [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec 04 10:14:29 compute-0 ceph-mon[75358]: [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec 04 10:14:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:29 compute-0 ceph-mon[75358]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 04 10:14:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_user
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 04 10:14:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 04 10:14:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_config
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 04 10:14:30 compute-0 heuristic_benz[76489]: ssh user set to ceph-admin. sudo will be used
Dec 04 10:14:30 compute-0 systemd[1]: libpod-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76473]: 2025-12-04 10:14:30.179079719 +0000 UTC m=+0.573592043 container died 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8-merged.mount: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76473]: 2025-12-04 10:14:30.220920663 +0000 UTC m=+0.615433027 container remove 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:30 compute-0 systemd[1]: libpod-conmon-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.284400516 +0000 UTC m=+0.043639627 container create 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:14:30 compute-0 systemd[1]: Started libpod-conmon-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope.
Dec 04 10:14:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.358002722 +0000 UTC m=+0.117241863 container init 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.262526363 +0000 UTC m=+0.021765494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.368570452 +0000 UTC m=+0.127809563 container start 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.373092334 +0000 UTC m=+0.132331475 container attach 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 04 10:14:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: [cephadm INFO root] Set ssh private key
Dec 04 10:14:30 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 04 10:14:30 compute-0 systemd[1]: libpod-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.797091871 +0000 UTC m=+0.556331022 container died 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1-merged.mount: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76528]: 2025-12-04 10:14:30.845044715 +0000 UTC m=+0.604283826 container remove 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:14:30 compute-0 systemd[1]: libpod-conmon-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope: Deactivated successfully.
Dec 04 10:14:30 compute-0 podman[76583]: 2025-12-04 10:14:30.906032233 +0000 UTC m=+0.043684418 container create 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:30 compute-0 systemd[1]: Started libpod-conmon-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope.
Dec 04 10:14:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:30 compute-0 podman[76583]: 2025-12-04 10:14:30.883877395 +0000 UTC m=+0.021529520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:30 compute-0 podman[76583]: 2025-12-04 10:14:30.983499759 +0000 UTC m=+0.121151894 container init 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:30 compute-0 podman[76583]: 2025-12-04 10:14:30.996847899 +0000 UTC m=+0.134500034 container start 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:14:31 compute-0 podman[76583]: 2025-12-04 10:14:31.001185407 +0000 UTC m=+0.138837552 container attach 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:14:31 compute-0 ceph-mon[75358]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:31 compute-0 ceph-mon[75358]: Set ssh ssh_user
Dec 04 10:14:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:31 compute-0 ceph-mon[75358]: Set ssh ssh_config
Dec 04 10:14:31 compute-0 ceph-mon[75358]: ssh user set to ceph-admin. sudo will be used
Dec 04 10:14:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 04 10:14:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:31 compute-0 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 04 10:14:31 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 04 10:14:31 compute-0 systemd[1]: libpod-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope: Deactivated successfully.
Dec 04 10:14:31 compute-0 conmon[76599]: conmon 6008233402b8387d5428 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope/container/memory.events
Dec 04 10:14:31 compute-0 podman[76583]: 2025-12-04 10:14:31.492554468 +0000 UTC m=+0.630206613 container died 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e-merged.mount: Deactivated successfully.
Dec 04 10:14:31 compute-0 podman[76583]: 2025-12-04 10:14:31.534509753 +0000 UTC m=+0.672161868 container remove 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:14:31 compute-0 systemd[1]: libpod-conmon-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope: Deactivated successfully.
Dec 04 10:14:31 compute-0 podman[76637]: 2025-12-04 10:14:31.605325259 +0000 UTC m=+0.046726573 container create ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:31 compute-0 systemd[1]: Started libpod-conmon-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope.
Dec 04 10:14:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019893108 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:31 compute-0 podman[76637]: 2025-12-04 10:14:31.589211438 +0000 UTC m=+0.030612782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:31 compute-0 podman[76637]: 2025-12-04 10:14:31.684783741 +0000 UTC m=+0.126185075 container init ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:31 compute-0 podman[76637]: 2025-12-04 10:14:31.693741202 +0000 UTC m=+0.135142526 container start ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:31 compute-0 podman[76637]: 2025-12-04 10:14:31.701144785 +0000 UTC m=+0.142546139 container attach ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:31 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:32 compute-0 practical_snyder[76654]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYFo9w32Y96e7dxBpF4AJjs+qlgFcxOYjX05DtMUxNXFtUOf3ObodRfd0pD647pbzXkmOGGdM76hP956QwlOFiOK0OvMcbb/tkvWQawR2eOE+jl9eDC5G3Ok7ABwMpCNVqOQq/RQihqOVMXikT3NjUSrY34kxFvZm15o3mQlZu6Or1dZh+cXdm8+++GhM5tGjgfOuaOyJP0/didvf8CNuryXN/iH03ct33wRVlDtnIL1xqkpOhCnnjSFrcNhwudKQrA+yKZ00BHF0ZiiR43oxJRZH7yT847dgxrxBfPfD9zXof9tRuweMdgN0o75/kcjbJVkzsunOsBVRzOAp1R5h7qs0Ik1P/QwZczTZvyrlHW9ypgSZZbKqxGsyrhwz0UpVsMo2JGLWrs43tmKC6U9Rsm38X231jzwX8ii2XKVm4jnZleR5zK+KPesG8eYwgE4iVz4npBCt01eglKX96cA5jOURbqXiydJl1JXkbg+IggecbDre8NW3PfmL0hy9faQ8= zuul@controller
Dec 04 10:14:32 compute-0 systemd[1]: libpod-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope: Deactivated successfully.
Dec 04 10:14:32 compute-0 podman[76680]: 2025-12-04 10:14:32.22331471 +0000 UTC m=+0.048281280 container died ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8-merged.mount: Deactivated successfully.
Dec 04 10:14:32 compute-0 podman[76680]: 2025-12-04 10:14:32.262806081 +0000 UTC m=+0.087772571 container remove ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:14:32 compute-0 systemd[1]: libpod-conmon-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope: Deactivated successfully.
Dec 04 10:14:32 compute-0 podman[76695]: 2025-12-04 10:14:32.33993782 +0000 UTC m=+0.047303642 container create 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:32 compute-0 systemd[1]: Started libpod-conmon-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope.
Dec 04 10:14:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:32 compute-0 podman[76695]: 2025-12-04 10:14:32.319548814 +0000 UTC m=+0.026914656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:32 compute-0 podman[76695]: 2025-12-04 10:14:32.423995124 +0000 UTC m=+0.131360956 container init 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:14:32 compute-0 podman[76695]: 2025-12-04 10:14:32.43815562 +0000 UTC m=+0.145521432 container start 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:14:32 compute-0 podman[76695]: 2025-12-04 10:14:32.442124281 +0000 UTC m=+0.149490093 container attach 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:14:32 compute-0 ceph-mon[75358]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:32 compute-0 ceph-mon[75358]: Set ssh ssh_identity_key
Dec 04 10:14:32 compute-0 ceph-mon[75358]: Set ssh private key
Dec 04 10:14:32 compute-0 ceph-mon[75358]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:32 compute-0 ceph-mon[75358]: Set ssh ssh_identity_pub
Dec 04 10:14:32 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:33 compute-0 sshd-session[76737]: Accepted publickey for ceph-admin from 192.168.122.100 port 57956 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:33 compute-0 systemd-logind[798]: New session 21 of user ceph-admin.
Dec 04 10:14:33 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 04 10:14:33 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 04 10:14:33 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 04 10:14:33 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 04 10:14:33 compute-0 systemd[76741]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:33 compute-0 systemd[76741]: Queued start job for default target Main User Target.
Dec 04 10:14:33 compute-0 sshd-session[76754]: Accepted publickey for ceph-admin from 192.168.122.100 port 53524 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:33 compute-0 systemd-logind[798]: New session 23 of user ceph-admin.
Dec 04 10:14:33 compute-0 systemd[76741]: Created slice User Application Slice.
Dec 04 10:14:33 compute-0 systemd[76741]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 04 10:14:33 compute-0 systemd[76741]: Started Daily Cleanup of User's Temporary Directories.
Dec 04 10:14:33 compute-0 systemd[76741]: Reached target Paths.
Dec 04 10:14:33 compute-0 systemd[76741]: Reached target Timers.
Dec 04 10:14:33 compute-0 systemd[76741]: Starting D-Bus User Message Bus Socket...
Dec 04 10:14:33 compute-0 systemd[76741]: Starting Create User's Volatile Files and Directories...
Dec 04 10:14:33 compute-0 systemd[76741]: Listening on D-Bus User Message Bus Socket.
Dec 04 10:14:33 compute-0 systemd[76741]: Reached target Sockets.
Dec 04 10:14:33 compute-0 systemd[76741]: Finished Create User's Volatile Files and Directories.
Dec 04 10:14:33 compute-0 systemd[76741]: Reached target Basic System.
Dec 04 10:14:33 compute-0 systemd[76741]: Reached target Main User Target.
Dec 04 10:14:33 compute-0 systemd[76741]: Startup finished in 148ms.
Dec 04 10:14:33 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 04 10:14:33 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 04 10:14:33 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 04 10:14:33 compute-0 sshd-session[76737]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:33 compute-0 sshd-session[76754]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:33 compute-0 sudo[76762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:33 compute-0 sudo[76762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:33 compute-0 sudo[76762]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:33 compute-0 ceph-mon[75358]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:33 compute-0 sshd-session[76788]: Accepted publickey for ceph-admin from 192.168.122.100 port 53528 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:33 compute-0 systemd-logind[798]: New session 24 of user ceph-admin.
Dec 04 10:14:33 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 04 10:14:33 compute-0 sshd-session[76788]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:33 compute-0 sudo[76792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 04 10:14:33 compute-0 sudo[76792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:33 compute-0 sudo[76792]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:33 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:33 compute-0 sshd-session[76817]: Accepted publickey for ceph-admin from 192.168.122.100 port 53534 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:34 compute-0 systemd-logind[798]: New session 25 of user ceph-admin.
Dec 04 10:14:34 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 04 10:14:34 compute-0 sshd-session[76817]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:34 compute-0 sudo[76821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 04 10:14:34 compute-0 sudo[76821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:34 compute-0 sudo[76821]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:34 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 04 10:14:34 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 04 10:14:34 compute-0 sshd-session[76846]: Accepted publickey for ceph-admin from 192.168.122.100 port 53542 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:34 compute-0 systemd-logind[798]: New session 26 of user ceph-admin.
Dec 04 10:14:34 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 04 10:14:34 compute-0 sshd-session[76846]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:34 compute-0 ceph-mon[75358]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:34 compute-0 sudo[76850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:34 compute-0 sudo[76850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:34 compute-0 sudo[76850]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:34 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:34 compute-0 sshd-session[76875]: Accepted publickey for ceph-admin from 192.168.122.100 port 53548 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:34 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Dec 04 10:14:34 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 04 10:14:34 compute-0 sshd-session[76875]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:34 compute-0 sudo[76879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:34 compute-0 sudo[76879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:34 compute-0 sudo[76879]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:35 compute-0 sshd-session[76904]: Accepted publickey for ceph-admin from 192.168.122.100 port 53564 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:35 compute-0 systemd-logind[798]: New session 28 of user ceph-admin.
Dec 04 10:14:35 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 04 10:14:35 compute-0 sshd-session[76904]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:35 compute-0 sudo[76908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 04 10:14:35 compute-0 sudo[76908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:35 compute-0 sudo[76908]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:35 compute-0 sshd-session[76933]: Accepted publickey for ceph-admin from 192.168.122.100 port 53570 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:35 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Dec 04 10:14:35 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 04 10:14:35 compute-0 sshd-session[76933]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:35 compute-0 ceph-mon[75358]: Deploying cephadm binary to compute-0
Dec 04 10:14:35 compute-0 sudo[76937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:35 compute-0 sudo[76937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:35 compute-0 sudo[76937]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:35 compute-0 sshd-session[76962]: Accepted publickey for ceph-admin from 192.168.122.100 port 53586 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:35 compute-0 systemd-logind[798]: New session 30 of user ceph-admin.
Dec 04 10:14:35 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 04 10:14:35 compute-0 sshd-session[76962]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:35 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:35 compute-0 sudo[76966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 04 10:14:35 compute-0 sudo[76966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:35 compute-0 sudo[76966]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:36 compute-0 sshd-session[76991]: Accepted publickey for ceph-admin from 192.168.122.100 port 53600 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:36 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Dec 04 10:14:36 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 04 10:14:36 compute-0 sshd-session[76991]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052456 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:36 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:37 compute-0 sshd-session[77018]: Accepted publickey for ceph-admin from 192.168.122.100 port 53608 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:37 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Dec 04 10:14:37 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 04 10:14:37 compute-0 sshd-session[77018]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:37 compute-0 sudo[77022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 04 10:14:37 compute-0 sudo[77022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:37 compute-0 sudo[77022]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:37 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:37 compute-0 sshd-session[77047]: Accepted publickey for ceph-admin from 192.168.122.100 port 53612 ssh2: RSA SHA256:Mk2kZkwP1BzTEMCUVWrX+pJKq59RMfTSYlnhg3yccqc
Dec 04 10:14:37 compute-0 systemd-logind[798]: New session 33 of user ceph-admin.
Dec 04 10:14:37 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 04 10:14:37 compute-0 sshd-session[77047]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 04 10:14:38 compute-0 sudo[77051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 04 10:14:38 compute-0 sudo[77051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:38 compute-0 sudo[77051]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:38 compute-0 ceph-mgr[75651]: [cephadm INFO root] Added host compute-0
Dec 04 10:14:38 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 04 10:14:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:14:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:38 compute-0 blissful_lamarr[76711]: Added host 'compute-0' with addr '192.168.122.100'
Dec 04 10:14:38 compute-0 sudo[77097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:38 compute-0 sudo[77097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:38 compute-0 systemd[1]: libpod-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope: Deactivated successfully.
Dec 04 10:14:38 compute-0 sudo[77097]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:38 compute-0 podman[76695]: 2025-12-04 10:14:38.403999557 +0000 UTC m=+6.111365379 container died 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad-merged.mount: Deactivated successfully.
Dec 04 10:14:38 compute-0 podman[76695]: 2025-12-04 10:14:38.454901124 +0000 UTC m=+6.162266936 container remove 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:14:38 compute-0 systemd[1]: libpod-conmon-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope: Deactivated successfully.
Dec 04 10:14:38 compute-0 sudo[77124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Dec 04 10:14:38 compute-0 sudo[77124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:38 compute-0 podman[77160]: 2025-12-04 10:14:38.526695326 +0000 UTC m=+0.046868774 container create 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:38 compute-0 systemd[1]: Started libpod-conmon-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope.
Dec 04 10:14:38 compute-0 podman[77160]: 2025-12-04 10:14:38.505533555 +0000 UTC m=+0.025706963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:38 compute-0 podman[77160]: 2025-12-04 10:14:38.624670751 +0000 UTC m=+0.144844229 container init 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:38 compute-0 podman[77160]: 2025-12-04 10:14:38.634930756 +0000 UTC m=+0.155104164 container start 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:14:38 compute-0 podman[77160]: 2025-12-04 10:14:38.63907086 +0000 UTC m=+0.159244308 container attach 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:14:38 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:38 compute-0 sshd-session[76761]: Connection reset by authenticating user root 45.140.17.124 port 29992 [preauth]
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 04 10:14:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 04 10:14:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:39 compute-0 serene_herschel[77178]: Scheduled mon update...
Dec 04 10:14:39 compute-0 systemd[1]: libpod-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77230]: 2025-12-04 10:14:39.205570605 +0000 UTC m=+0.030931999 container died 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78-merged.mount: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77230]: 2025-12-04 10:14:39.239472785 +0000 UTC m=+0.064834179 container remove 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:39 compute-0 systemd[1]: libpod-conmon-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.309981896 +0000 UTC m=+0.042629790 container create 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:14:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:39 compute-0 ceph-mon[75358]: Added host compute-0
Dec 04 10:14:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:14:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:39 compute-0 systemd[1]: Started libpod-conmon-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope.
Dec 04 10:14:39 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.379264803 +0000 UTC m=+0.111912697 container init 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.287334497 +0000 UTC m=+0.019982441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.389918915 +0000 UTC m=+0.122566809 container start 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.393768284 +0000 UTC m=+0.126416208 container attach 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:14:39 compute-0 podman[77213]: 2025-12-04 10:14:39.578502911 +0000 UTC m=+0.780647131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.69666637 +0000 UTC m=+0.048078597 container create 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:39 compute-0 systemd[1]: Started libpod-conmon-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope.
Dec 04 10:14:39 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.676806773 +0000 UTC m=+0.028219030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.768580315 +0000 UTC m=+0.119992572 container init 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.776643631 +0000 UTC m=+0.128055868 container start 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.780206364 +0000 UTC m=+0.131618641 container attach 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 04 10:14:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:39 compute-0 keen_cohen[77262]: Scheduled mgr update...
Dec 04 10:14:39 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:39 compute-0 systemd[1]: libpod-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 conmon[77262]: conmon 5fc18a6be8cf3fc71582 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope/container/memory.events
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.872588068 +0000 UTC m=+0.605235962 container died 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:14:39 compute-0 modest_booth[77315]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 04 10:14:39 compute-0 systemd[1]: libpod-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.896017211 +0000 UTC m=+0.247429438 container died 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec-merged.mount: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77246]: 2025-12-04 10:14:39.920616914 +0000 UTC m=+0.653264808 container remove 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:39 compute-0 systemd[1]: libpod-conmon-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a336f0dd93ebeb58a2693bd41cf71fb59c22b1e208973e0951397de169b045-merged.mount: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77299]: 2025-12-04 10:14:39.955524783 +0000 UTC m=+0.306937040 container remove 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:14:39 compute-0 systemd[1]: libpod-conmon-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope: Deactivated successfully.
Dec 04 10:14:39 compute-0 podman[77343]: 2025-12-04 10:14:39.979982413 +0000 UTC m=+0.038679978 container create a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:40 compute-0 sudo[77124]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 04 10:14:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:40 compute-0 systemd[1]: Started libpod-conmon-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope.
Dec 04 10:14:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:40.057203494 +0000 UTC m=+0.115901089 container init a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:39.962807764 +0000 UTC m=+0.021505329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:40.062906986 +0000 UTC m=+0.121604571 container start a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:40.067230734 +0000 UTC m=+0.125928289 container attach a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:14:40 compute-0 sudo[77366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:40 compute-0 sudo[77366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:40 compute-0 sudo[77366]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:40 compute-0 sudo[77393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 04 10:14:40 compute-0 sudo[77393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:40 compute-0 ceph-mon[75358]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:40 compute-0 ceph-mon[75358]: Saving service mon spec with placement count:5
Dec 04 10:14:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:40 compute-0 sudo[77393]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:40 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service crash spec with placement *
Dec 04 10:14:40 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 04 10:14:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 04 10:14:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:40 compute-0 nifty_yalow[77363]: Scheduled crash update...
Dec 04 10:14:40 compute-0 systemd[1]: libpod-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope: Deactivated successfully.
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:40.50389725 +0000 UTC m=+0.562594815 container died a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:14:40 compute-0 sudo[77457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:40 compute-0 sudo[77457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:40 compute-0 sudo[77457]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755-merged.mount: Deactivated successfully.
Dec 04 10:14:40 compute-0 podman[77343]: 2025-12-04 10:14:40.543926111 +0000 UTC m=+0.602623656 container remove a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:40 compute-0 systemd[1]: libpod-conmon-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope: Deactivated successfully.
Dec 04 10:14:40 compute-0 sudo[77491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:14:40 compute-0 sudo[77491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:40 compute-0 podman[77517]: 2025-12-04 10:14:40.610579922 +0000 UTC m=+0.043402153 container create 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:14:40 compute-0 systemd[1]: Started libpod-conmon-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope.
Dec 04 10:14:40 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:40 compute-0 podman[77517]: 2025-12-04 10:14:40.689066066 +0000 UTC m=+0.121888337 container init 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:40 compute-0 podman[77517]: 2025-12-04 10:14:40.594341149 +0000 UTC m=+0.027163420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:40 compute-0 podman[77517]: 2025-12-04 10:14:40.696769454 +0000 UTC m=+0.129591705 container start 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:14:40 compute-0 podman[77517]: 2025-12-04 10:14:40.703131299 +0000 UTC m=+0.135953580 container attach 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:41 compute-0 podman[77603]: 2025-12-04 10:14:41.068816855 +0000 UTC m=+0.085469730 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:14:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 04 10:14:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4136323843' entity='client.admin' 
Dec 04 10:14:41 compute-0 podman[77517]: 2025-12-04 10:14:41.144245164 +0000 UTC m=+0.577067395 container died 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:14:41 compute-0 systemd[1]: libpod-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope: Deactivated successfully.
Dec 04 10:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784-merged.mount: Deactivated successfully.
Dec 04 10:14:41 compute-0 podman[77517]: 2025-12-04 10:14:41.184476358 +0000 UTC m=+0.617298589 container remove 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:14:41 compute-0 systemd[1]: libpod-conmon-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope: Deactivated successfully.
Dec 04 10:14:41 compute-0 podman[77603]: 2025-12-04 10:14:41.21451519 +0000 UTC m=+0.231168035 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:14:41 compute-0 podman[77638]: 2025-12-04 10:14:41.251859713 +0000 UTC m=+0.040063783 container create b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:41 compute-0 systemd[1]: Started libpod-conmon-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope.
Dec 04 10:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:41 compute-0 podman[77638]: 2025-12-04 10:14:41.232379692 +0000 UTC m=+0.020583782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:41 compute-0 podman[77638]: 2025-12-04 10:14:41.332721558 +0000 UTC m=+0.120925628 container init b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:41 compute-0 podman[77638]: 2025-12-04 10:14:41.339579592 +0000 UTC m=+0.127783672 container start b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:41 compute-0 podman[77638]: 2025-12-04 10:14:41.343264579 +0000 UTC m=+0.131468669 container attach b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:41 compute-0 ceph-mon[75358]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:41 compute-0 ceph-mon[75358]: Saving service mgr spec with placement count:2
Dec 04 10:14:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:41 compute-0 ceph-mon[75358]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:41 compute-0 ceph-mon[75358]: Saving service crash spec with placement *
Dec 04 10:14:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:41 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4136323843' entity='client.admin' 
Dec 04 10:14:41 compute-0 sudo[77491]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:41 compute-0 sudo[77734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:41 compute-0 sudo[77734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:41 compute-0 sudo[77734]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054699 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:41 compute-0 sudo[77759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:14:41 compute-0 sudo[77759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 04 10:14:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:41 compute-0 sshd-session[77227]: Connection reset by authenticating user root 45.140.17.124 port 29998 [preauth]
Dec 04 10:14:41 compute-0 systemd[1]: libpod-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope: Deactivated successfully.
Dec 04 10:14:41 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:41 compute-0 podman[77786]: 2025-12-04 10:14:41.874624479 +0000 UTC m=+0.030648473 container died b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b-merged.mount: Deactivated successfully.
Dec 04 10:14:41 compute-0 podman[77786]: 2025-12-04 10:14:41.918142033 +0000 UTC m=+0.074165967 container remove b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:41 compute-0 systemd[1]: libpod-conmon-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope: Deactivated successfully.
Dec 04 10:14:41 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77810 (sysctl)
Dec 04 10:14:41 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 04 10:14:41 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 04 10:14:41 compute-0 podman[77811]: 2025-12-04 10:14:41.997131566 +0000 UTC m=+0.050382678 container create 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:14:42 compute-0 systemd[1]: Started libpod-conmon-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope.
Dec 04 10:14:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:41.979908627 +0000 UTC m=+0.033159749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:42.188368071 +0000 UTC m=+0.241619203 container init 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:42.196223622 +0000 UTC m=+0.249474734 container start 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:42.199495161 +0000 UTC m=+0.252746273 container attach 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:42 compute-0 sudo[77759]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:42 compute-0 sudo[77873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:42 compute-0 sudo[77873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:42 compute-0 sudo[77873]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:42 compute-0 sudo[77899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 04 10:14:42 compute-0 sudo[77899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:42 compute-0 ceph-mgr[75651]: [cephadm INFO root] Added label _admin to host compute-0
Dec 04 10:14:42 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 04 10:14:42 compute-0 vibrant_wozniak[77832]: Added label _admin to host compute-0
Dec 04 10:14:42 compute-0 systemd[1]: libpod-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope: Deactivated successfully.
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:42.613938126 +0000 UTC m=+0.667189248 container died 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a-merged.mount: Deactivated successfully.
Dec 04 10:14:42 compute-0 podman[77811]: 2025-12-04 10:14:42.650127878 +0000 UTC m=+0.703378990 container remove 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:14:42 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:42 compute-0 systemd[1]: libpod-conmon-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope: Deactivated successfully.
Dec 04 10:14:42 compute-0 sudo[77899]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:42 compute-0 podman[77949]: 2025-12-04 10:14:42.721232639 +0000 UTC m=+0.043237770 container create e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:14:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:42 compute-0 systemd[1]: Started libpod-conmon-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope.
Dec 04 10:14:42 compute-0 podman[77949]: 2025-12-04 10:14:42.700664668 +0000 UTC m=+0.022669669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:42 compute-0 sudo[77970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:42 compute-0 sudo[77970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:42 compute-0 sudo[77970]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:42 compute-0 podman[77949]: 2025-12-04 10:14:42.825084339 +0000 UTC m=+0.147089330 container init e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:42 compute-0 podman[77949]: 2025-12-04 10:14:42.83289458 +0000 UTC m=+0.154899551 container start e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:14:42 compute-0 podman[77949]: 2025-12-04 10:14:42.836322962 +0000 UTC m=+0.158327933 container attach e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:14:42 compute-0 sudo[78000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- inventory --format=json-pretty --filter-for-batch
Dec 04 10:14:42 compute-0 sudo[78000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.157809213 +0000 UTC m=+0.042989466 container create 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:14:43 compute-0 systemd[1]: Started libpod-conmon-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope.
Dec 04 10:14:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.134037135 +0000 UTC m=+0.019217398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.237651911 +0000 UTC m=+0.122832184 container init 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.244372832 +0000 UTC m=+0.129553085 container start 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:43 compute-0 crazy_tu[78073]: 167 167
Dec 04 10:14:43 compute-0 systemd[1]: libpod-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope: Deactivated successfully.
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.248343333 +0000 UTC m=+0.133523596 container attach 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.2486726 +0000 UTC m=+0.133852843 container died 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f93a1bf0568521a92dafce3f2fe9774dcf55d63cdb22ce05b7c6bf145171a2d-merged.mount: Deactivated successfully.
Dec 04 10:14:43 compute-0 podman[78057]: 2025-12-04 10:14:43.284452644 +0000 UTC m=+0.169632887 container remove 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:43 compute-0 systemd[1]: libpod-conmon-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope: Deactivated successfully.
Dec 04 10:14:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 04 10:14:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2503549990' entity='client.admin' 
Dec 04 10:14:43 compute-0 fervent_chebyshev[77993]: set mgr/dashboard/cluster/status
Dec 04 10:14:43 compute-0 systemd[1]: libpod-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope: Deactivated successfully.
Dec 04 10:14:43 compute-0 podman[77949]: 2025-12-04 10:14:43.439515667 +0000 UTC m=+0.761520638 container died e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649-merged.mount: Deactivated successfully.
Dec 04 10:14:43 compute-0 podman[77949]: 2025-12-04 10:14:43.521225778 +0000 UTC m=+0.843230749 container remove e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:14:43 compute-0 systemd[1]: libpod-conmon-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope: Deactivated successfully.
Dec 04 10:14:43 compute-0 systemd[1]: Reloading.
Dec 04 10:14:43 compute-0 systemd-rc-local-generator[78132]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:43 compute-0 systemd-sysv-generator[78137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:43 compute-0 ceph-mon[75358]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:43 compute-0 ceph-mon[75358]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:43 compute-0 ceph-mon[75358]: Added label _admin to host compute-0
Dec 04 10:14:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:43 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2503549990' entity='client.admin' 
Dec 04 10:14:43 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:43 compute-0 sshd-session[77821]: Invalid user squid from 45.140.17.124 port 59912
Dec 04 10:14:43 compute-0 sudo[74307]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.109393832 +0000 UTC m=+0.043464414 container create 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:14:44 compute-0 systemd[1]: Started libpod-conmon-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope.
Dec 04 10:14:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.09205401 +0000 UTC m=+0.026124632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.195321561 +0000 UTC m=+0.129392163 container init 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.203864224 +0000 UTC m=+0.137934826 container start 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.207313546 +0000 UTC m=+0.141384118 container attach 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:14:44 compute-0 sudo[78194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syxdhgqgadnfkicwwyblyjtixywmscty ; /usr/bin/python3'
Dec 04 10:14:44 compute-0 sudo[78194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:44 compute-0 sshd-session[77821]: Connection reset by invalid user squid 45.140.17.124 port 59912 [preauth]
Dec 04 10:14:44 compute-0 python3[78196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:44 compute-0 podman[78202]: 2025-12-04 10:14:44.568326059 +0000 UTC m=+0.059030714 container create cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:14:44 compute-0 systemd[1]: Started libpod-conmon-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope.
Dec 04 10:14:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:44 compute-0 podman[78202]: 2025-12-04 10:14:44.538911479 +0000 UTC m=+0.029616184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:44 compute-0 podman[78202]: 2025-12-04 10:14:44.652557586 +0000 UTC m=+0.143262231 container init cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:44 compute-0 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 04 10:14:44 compute-0 podman[78202]: 2025-12-04 10:14:44.661368585 +0000 UTC m=+0.152073210 container start cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:44 compute-0 podman[78202]: 2025-12-04 10:14:44.683498923 +0000 UTC m=+0.174203538 container attach cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:44 compute-0 mystifying_newton[78166]: [
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:     {
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "available": false,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "being_replaced": false,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "ceph_device_lvm": false,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "lsm_data": {},
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "lvs": [],
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "path": "/dev/sr0",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "rejected_reasons": [
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "Has a FileSystem",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "Insufficient space (<5GB)"
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         ],
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         "sys_api": {
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "actuators": null,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "device_nodes": [
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:                 "sr0"
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             ],
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "devname": "sr0",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "human_readable_size": "482.00 KB",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "id_bus": "ata",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "model": "QEMU DVD-ROM",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "nr_requests": "2",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "parent": "/dev/sr0",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "partitions": {},
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "path": "/dev/sr0",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "removable": "1",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "rev": "2.5+",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "ro": "0",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "rotational": "1",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "sas_address": "",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "sas_device_handle": "",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "scheduler_mode": "mq-deadline",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "sectors": 0,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "sectorsize": "2048",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "size": 493568.0,
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "support_discard": "2048",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "type": "disk",
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:             "vendor": "QEMU"
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:         }
Dec 04 10:14:44 compute-0 mystifying_newton[78166]:     }
Dec 04 10:14:44 compute-0 mystifying_newton[78166]: ]
Dec 04 10:14:44 compute-0 systemd[1]: libpod-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope: Deactivated successfully.
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.75825894 +0000 UTC m=+0.692329532 container died 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522-merged.mount: Deactivated successfully.
Dec 04 10:14:44 compute-0 podman[78150]: 2025-12-04 10:14:44.870285708 +0000 UTC m=+0.804356290 container remove 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:14:44 compute-0 systemd[1]: libpod-conmon-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope: Deactivated successfully.
Dec 04 10:14:44 compute-0 sudo[78000]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:14:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:44 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 04 10:14:44 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 04 10:14:44 compute-0 sudo[79005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 04 10:14:44 compute-0 sudo[79005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79005]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph
Dec 04 10:14:45 compute-0 sudo[79030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79030]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 04 10:14:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3564853227' entity='client.admin' 
Dec 04 10:14:45 compute-0 systemd[1]: libpod-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope: Deactivated successfully.
Dec 04 10:14:45 compute-0 podman[78202]: 2025-12-04 10:14:45.095264221 +0000 UTC m=+0.585968906 container died cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f-merged.mount: Deactivated successfully.
Dec 04 10:14:45 compute-0 podman[78202]: 2025-12-04 10:14:45.157847827 +0000 UTC m=+0.648552482 container remove cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:14:45 compute-0 sudo[79056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79056]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 systemd[1]: libpod-conmon-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope: Deactivated successfully.
Dec 04 10:14:45 compute-0 sudo[78194]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:45 compute-0 sudo[79093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79093]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79118]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79166]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79191]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 04 10:14:45 compute-0 sudo[79216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79216]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec 04 10:14:45 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec 04 10:14:45 compute-0 sudo[79285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config
Dec 04 10:14:45 compute-0 sudo[79285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79285]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config
Dec 04 10:14:45 compute-0 sudo[79331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79331]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 sudo[79366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79366]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:45 compute-0 sudo[79391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:45 compute-0 sudo[79391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79391]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:45 compute-0 ceph-mon[75358]: Updating compute-0:/etc/ceph/ceph.conf
Dec 04 10:14:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3564853227' entity='client.admin' 
Dec 04 10:14:45 compute-0 ceph-mon[75358]: Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec 04 10:14:45 compute-0 sudo[79428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf.new
Dec 04 10:14:45 compute-0 sudo[79428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:45 compute-0 sudo[79428]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwuyzxsfrggwdhtlljrpxvvjgjwognr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843285.524177-36393-56964339261068/async_wrapper.py j52032110365 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843285.524177-36393-56964339261068/AnsiballZ_command.py _'
Dec 04 10:14:46 compute-0 sudo[79553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:46 compute-0 sudo[79521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf.new
Dec 04 10:14:46 compute-0 sudo[79521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79521]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf.new
Dec 04 10:14:46 compute-0 sudo[79564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79564]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf.new /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec 04 10:14:46 compute-0 sudo[79589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79561]: Invoked with j52032110365 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843285.524177-36393-56964339261068/AnsiballZ_command.py _
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79616]: Starting module and watcher
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79616]: Start watching 79617 (30)
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79617]: Start module (79617)
Dec 04 10:14:46 compute-0 sudo[79589]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79561]: Return async_wrapper task started.
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 sudo[79553]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 04 10:14:46 compute-0 sudo[79619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79619]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph
Dec 04 10:14:46 compute-0 sudo[79644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79644]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 python3[79618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:46 compute-0 sudo[79669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79669]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sshd-session[78215]: Connection reset by authenticating user root 45.140.17.124 port 59924 [preauth]
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.418152958 +0000 UTC m=+0.051249884 container create c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:46 compute-0 sudo[79700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:46 compute-0 sudo[79700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79700]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 systemd[1]: Started libpod-conmon-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope.
Dec 04 10:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:46 compute-0 sudo[79734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.395857456 +0000 UTC m=+0.028954432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:46 compute-0 sudo[79734]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.495230256 +0000 UTC m=+0.128327212 container init c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.502347705 +0000 UTC m=+0.135444641 container start c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.505648464 +0000 UTC m=+0.138745440 container attach c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:46 compute-0 sudo[79786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79786]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:46 compute-0 sudo[79813]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 04 10:14:46 compute-0 sudo[79856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 sudo[79856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79856]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 sudo[79881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config
Dec 04 10:14:46 compute-0 sudo[79881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79881]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config
Dec 04 10:14:46 compute-0 sudo[79906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79906]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 sudo[79931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79931]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:14:46 compute-0 musing_mahavira[79751]: 
Dec 04 10:14:46 compute-0 musing_mahavira[79751]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 04 10:14:46 compute-0 sudo[79956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:46 compute-0 sudo[79956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79956]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:46 compute-0 systemd[1]: libpod-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope: Deactivated successfully.
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.927677345 +0000 UTC m=+0.560774281 container died c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:14:46 compute-0 ceph-mon[75358]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 ceph-mon[75358]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:46 compute-0 ceph-mon[75358]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 04 10:14:46 compute-0 ceph-mon[75358]: Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec 04 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54-merged.mount: Deactivated successfully.
Dec 04 10:14:46 compute-0 podman[79692]: 2025-12-04 10:14:46.965368624 +0000 UTC m=+0.598465560 container remove c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:14:46 compute-0 systemd[1]: libpod-conmon-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope: Deactivated successfully.
Dec 04 10:14:46 compute-0 ansible-async_wrapper.py[79617]: Module complete (79617)
Dec 04 10:14:46 compute-0 sudo[79985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring.new
Dec 04 10:14:46 compute-0 sudo[79985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:46 compute-0 sudo[79985]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 sudo[80043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring.new
Dec 04 10:14:47 compute-0 sudo[80043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:47 compute-0 sudo[80043]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 sudo[80068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring.new
Dec 04 10:14:47 compute-0 sudo[80068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:47 compute-0 sudo[80068]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 sudo[80093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring.new /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec 04 10:14:47 compute-0 sudo[80093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:47 compute-0 sudo[80093]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:47 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1))
Dec 04 10:14:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 04 10:14:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:47 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 04 10:14:47 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 04 10:14:47 compute-0 sudo[80126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:47 compute-0 sudo[80126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:47 compute-0 sudo[80126]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 sudo[80166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:47 compute-0 sudo[80166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:47 compute-0 sudo[80214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezjeevsliugpyhsjayildrolxmwzbzm ; /usr/bin/python3'
Dec 04 10:14:47 compute-0 sudo[80214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:47 compute-0 python3[80223]: ansible-ansible.legacy.async_status Invoked with jid=j52032110365.79561 mode=status _async_dir=/root/.ansible_async
Dec 04 10:14:47 compute-0 sudo[80214]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.77788138 +0000 UTC m=+0.036466438 container create 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:47 compute-0 systemd[1]: Started libpod-conmon-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope.
Dec 04 10:14:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.846830602 +0000 UTC m=+0.105415700 container init 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.853755827 +0000 UTC m=+0.112340885 container start 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:14:47 compute-0 sudo[80321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfleotjjqcbigtnvbknqvjwfaptaqwkq ; /usr/bin/python3'
Dec 04 10:14:47 compute-0 sudo[80321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:47 compute-0 systemd[1]: libpod-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope: Deactivated successfully.
Dec 04 10:14:47 compute-0 great_agnesi[80299]: 167 167
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.762946971 +0000 UTC m=+0.021532039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:47 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.941995086 +0000 UTC m=+0.200580154 container attach 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:47 compute-0 podman[80256]: 2025-12-04 10:14:47.942367293 +0000 UTC m=+0.200952361 container died 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:14:47 compute-0 python3[80325]: ansible-ansible.legacy.async_status Invoked with jid=j52032110365.79561 mode=cleanup _async_dir=/root/.ansible_async
Dec 04 10:14:48 compute-0 sudo[80321]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcb4a812cddb849627bd1eaa35452d249231c35c6023a7a26f2d24356e5327e3-merged.mount: Deactivated successfully.
Dec 04 10:14:48 compute-0 podman[80256]: 2025-12-04 10:14:48.172523788 +0000 UTC m=+0.431108836 container remove 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:48 compute-0 systemd[1]: Reloading.
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 04 10:14:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:48 compute-0 ceph-mon[75358]: Deploying daemon crash.compute-0 on compute-0
Dec 04 10:14:48 compute-0 systemd-rc-local-generator[80358]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:48 compute-0 systemd-sysv-generator[80367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:48 compute-0 systemd[1]: libpod-conmon-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope: Deactivated successfully.
Dec 04 10:14:48 compute-0 sudo[80398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzcddjpjexlbcwtmidmwzcahdqlaefp ; /usr/bin/python3'
Dec 04 10:14:48 compute-0 sudo[80398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:48 compute-0 systemd[1]: Reloading.
Dec 04 10:14:48 compute-0 systemd-rc-local-generator[80428]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:48 compute-0 systemd-sysv-generator[80434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:48 compute-0 python3[80402]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 04 10:14:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:48 compute-0 sudo[80398]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:48 compute-0 systemd[1]: Starting Ceph crash.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:48 compute-0 sshd-session[79853]: Connection reset by authenticating user root 45.140.17.124 port 59938 [preauth]
Dec 04 10:14:49 compute-0 podman[80492]: 2025-12-04 10:14:49.049164458 +0000 UTC m=+0.066560559 container create 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 podman[80492]: 2025-12-04 10:14:49.013178151 +0000 UTC m=+0.030574332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 podman[80492]: 2025-12-04 10:14:49.121129495 +0000 UTC m=+0.138525616 container init 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:14:49 compute-0 sudo[80533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwyzkosmbeilxmosvobjyomqiahohxyn ; /usr/bin/python3'
Dec 04 10:14:49 compute-0 sudo[80533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:49 compute-0 podman[80492]: 2025-12-04 10:14:49.136022734 +0000 UTC m=+0.153418825 container start 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:49 compute-0 bash[80492]: 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec
Dec 04 10:14:49 compute-0 systemd[1]: Started Ceph crash.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:49 compute-0 sudo[80166]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1))
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2))
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:14:49 compute-0 ceph-mon[75358]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 04 10:14:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec 04 10:14:49 compute-0 python3[80536]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.312+0000 7f8d6e5f7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.312+0000 7f8d6e5f7640 -1 AuthRegistry(0x7f8d68052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d6e5f7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d6e5f7640 -1 AuthRegistry(0x7f8d6e5f5fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d67fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.315+0000 7f8d6e5f7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 04 10:14:49 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 04 10:14:49 compute-0 sudo[80540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:49 compute-0 sudo[80540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:49 compute-0 sudo[80540]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:49 compute-0 podman[80545]: 2025-12-04 10:14:49.334384056 +0000 UTC m=+0.041833325 container create 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:49 compute-0 systemd[1]: Started libpod-conmon-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope.
Dec 04 10:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:49 compute-0 sudo[80588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:49 compute-0 sudo[80588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:49 compute-0 podman[80545]: 2025-12-04 10:14:49.31571375 +0000 UTC m=+0.023163049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:49 compute-0 podman[80545]: 2025-12-04 10:14:49.42173598 +0000 UTC m=+0.129185249 container init 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:14:49 compute-0 podman[80545]: 2025-12-04 10:14:49.434807365 +0000 UTC m=+0.142256634 container start 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:14:49 compute-0 podman[80545]: 2025-12-04 10:14:49.439117812 +0000 UTC m=+0.146567081 container attach 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:14:49 compute-0 vigorous_kepler[80614]: 
Dec 04 10:14:49 compute-0 vigorous_kepler[80614]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 04 10:14:49 compute-0 systemd[1]: libpod-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope: Deactivated successfully.
Dec 04 10:14:49 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:49 compute-0 podman[80680]: 2025-12-04 10:14:49.884732619 +0000 UTC m=+0.056983957 container create de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:14:49 compute-0 podman[80695]: 2025-12-04 10:14:49.89923184 +0000 UTC m=+0.036525418 container died 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:49 compute-0 systemd[1]: Started libpod-conmon-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope.
Dec 04 10:14:49 compute-0 podman[80680]: 2025-12-04 10:14:49.858254452 +0000 UTC m=+0.030505870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d-merged.mount: Deactivated successfully.
Dec 04 10:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:49 compute-0 podman[80695]: 2025-12-04 10:14:49.981492721 +0000 UTC m=+0.118786229 container remove 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:14:49 compute-0 podman[80680]: 2025-12-04 10:14:49.986466091 +0000 UTC m=+0.158717509 container init de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:49 compute-0 systemd[1]: libpod-conmon-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope: Deactivated successfully.
Dec 04 10:14:49 compute-0 podman[80680]: 2025-12-04 10:14:49.99749385 +0000 UTC m=+0.169745188 container start de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:14:50 compute-0 podman[80680]: 2025-12-04 10:14:50.001499142 +0000 UTC m=+0.173750530 container attach de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Dec 04 10:14:50 compute-0 funny_ritchie[80714]: 167 167
Dec 04 10:14:50 compute-0 systemd[1]: libpod-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope: Deactivated successfully.
Dec 04 10:14:50 compute-0 sudo[80533]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:50 compute-0 podman[80719]: 2025-12-04 10:14:50.050579726 +0000 UTC m=+0.029802878 container died de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5aa5a6578536324dc58b133b1ecb1335713a43a9d428fc1d2a6beeec4ccbdbb-merged.mount: Deactivated successfully.
Dec 04 10:14:50 compute-0 podman[80719]: 2025-12-04 10:14:50.091860359 +0000 UTC m=+0.071083511 container remove de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:14:50 compute-0 systemd[1]: libpod-conmon-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope: Deactivated successfully.
Dec 04 10:14:50 compute-0 systemd[1]: Reloading.
Dec 04 10:14:50 compute-0 systemd-rc-local-generator[80763]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:50 compute-0 systemd-sysv-generator[80767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 04 10:14:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:14:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:50 compute-0 ceph-mon[75358]: Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec 04 10:14:50 compute-0 sudo[80794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpcrqehgxlgkzcgsfilgunhlkwhtjhkq ; /usr/bin/python3'
Dec 04 10:14:50 compute-0 sudo[80794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:50 compute-0 systemd[1]: Reloading.
Dec 04 10:14:50 compute-0 systemd-rc-local-generator[80827]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:50 compute-0 systemd-sysv-generator[80830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:50 compute-0 python3[80798]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:50 compute-0 podman[80836]: 2025-12-04 10:14:50.664186718 +0000 UTC m=+0.043528795 container create 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 04 10:14:50 compute-0 podman[80836]: 2025-12-04 10:14:50.646873127 +0000 UTC m=+0.026215204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:50 compute-0 systemd[1]: Started libpod-conmon-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope.
Dec 04 10:14:50 compute-0 systemd[1]: Starting Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:50 compute-0 podman[80836]: 2025-12-04 10:14:50.811701895 +0000 UTC m=+0.191043972 container init 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:50 compute-0 podman[80836]: 2025-12-04 10:14:50.823199863 +0000 UTC m=+0.202541940 container start 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:14:50 compute-0 podman[80836]: 2025-12-04 10:14:50.827052592 +0000 UTC m=+0.206394679 container attach 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:14:50 compute-0 podman[80923]: 2025-12-04 10:14:50.988743085 +0000 UTC m=+0.044831159 container create 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/lib/ceph/mgr/ceph-compute-0.tucvmw supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:51 compute-0 podman[80923]: 2025-12-04 10:14:51.053646433 +0000 UTC m=+0.109734527 container init 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:14:51 compute-0 podman[80923]: 2025-12-04 10:14:51.062946471 +0000 UTC m=+0.119034545 container start 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:14:51 compute-0 podman[80923]: 2025-12-04 10:14:50.967837898 +0000 UTC m=+0.023926022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:51 compute-0 bash[80923]: 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68
Dec 04 10:14:51 compute-0 systemd[1]: Started Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: pidfile_write: ignore empty --pid-file
Dec 04 10:14:51 compute-0 sudo[80588]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'alerts'
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2))
Dec 04 10:14:51 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 04 10:14:51 compute-0 ansible-async_wrapper.py[79616]: Done in kid B.
Dec 04 10:14:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/602074459' entity='client.admin' 
Dec 04 10:14:51 compute-0 systemd[1]: libpod-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope: Deactivated successfully.
Dec 04 10:14:51 compute-0 sudo[80963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:14:51 compute-0 sudo[80963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'balancer'
Dec 04 10:14:51 compute-0 sudo[80963]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:51 compute-0 podman[80989]: 2025-12-04 10:14:51.281925795 +0000 UTC m=+0.024019114 container died 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:14:51 compute-0 ceph-mon[75358]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/602074459' entity='client.admin' 
Dec 04 10:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a-merged.mount: Deactivated successfully.
Dec 04 10:14:51 compute-0 sudo[80996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:51 compute-0 sudo[80996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:51 compute-0 sudo[80996]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:51 compute-0 podman[80989]: 2025-12-04 10:14:51.321342835 +0000 UTC m=+0.063436134 container remove 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:51 compute-0 systemd[1]: libpod-conmon-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope: Deactivated successfully.
Dec 04 10:14:51 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'cephadm'
Dec 04 10:14:51 compute-0 sudo[80794]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:51 compute-0 sudo[81030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:14:51 compute-0 sudo[81030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:51 compute-0 sudo[81078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittcmsipvqgzusxhemijaeaueatzkmpq ; /usr/bin/python3'
Dec 04 10:14:51 compute-0 sudo[81078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:51 compute-0 python3[81080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:51 compute-0 podman[81105]: 2025-12-04 10:14:51.696509152 +0000 UTC m=+0.029537933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:51 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:51 compute-0 podman[81105]: 2025-12-04 10:14:51.982454644 +0000 UTC m=+0.315483435 container create 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:52 compute-0 systemd[1]: Started libpod-conmon-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope.
Dec 04 10:14:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:52 compute-0 podman[81105]: 2025-12-04 10:14:52.114234507 +0000 UTC m=+0.447263378 container init 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:14:52 compute-0 podman[81105]: 2025-12-04 10:14:52.126830444 +0000 UTC m=+0.459859245 container start 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:14:52 compute-0 podman[81105]: 2025-12-04 10:14:52.131192222 +0000 UTC m=+0.464221073 container attach 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:52 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'crash'
Dec 04 10:14:52 compute-0 podman[81153]: 2025-12-04 10:14:52.215010482 +0000 UTC m=+0.084006884 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:14:52 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'dashboard'
Dec 04 10:14:52 compute-0 podman[81153]: 2025-12-04 10:14:52.33424619 +0000 UTC m=+0.203242652 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3658591949' entity='client.admin' 
Dec 04 10:14:52 compute-0 systemd[1]: libpod-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope: Deactivated successfully.
Dec 04 10:14:52 compute-0 podman[81105]: 2025-12-04 10:14:52.598888847 +0000 UTC m=+0.931917678 container died 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b-merged.mount: Deactivated successfully.
Dec 04 10:14:52 compute-0 podman[81105]: 2025-12-04 10:14:52.63795099 +0000 UTC m=+0.970979751 container remove 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:14:52 compute-0 systemd[1]: libpod-conmon-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope: Deactivated successfully.
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:52 compute-0 sudo[81078]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:52 compute-0 sudo[81030]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 2 completed events
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:52 compute-0 sudo[81335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrycfgwguhmklkkyezhdqiafwbagluc ; /usr/bin/python3'
Dec 04 10:14:52 compute-0 sudo[81307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:14:52 compute-0 sudo[81335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:52 compute-0 sudo[81307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:52 compute-0 sudo[81307]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 04 10:14:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 04 10:14:52 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 04 10:14:52 compute-0 sudo[81346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:52 compute-0 sudo[81346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:52 compute-0 sudo[81346]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:53 compute-0 sudo[81371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:53 compute-0 sudo[81371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'devicehealth'
Dec 04 10:14:53 compute-0 python3[81344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'diskprediction_local'
Dec 04 10:14:53 compute-0 podman[81396]: 2025-12-04 10:14:53.127253863 +0000 UTC m=+0.049629154 container create 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:53 compute-0 systemd[1]: Started libpod-conmon-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope.
Dec 04 10:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:53 compute-0 podman[81396]: 2025-12-04 10:14:53.104037406 +0000 UTC m=+0.026412707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:53 compute-0 podman[81396]: 2025-12-04 10:14:53.221078043 +0000 UTC m=+0.143453334 container init 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:53 compute-0 podman[81396]: 2025-12-04 10:14:53.227317726 +0000 UTC m=+0.149693007 container start 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:14:53 compute-0 podman[81396]: 2025-12-04 10:14:53.231628874 +0000 UTC m=+0.154004205 container attach 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 04 10:14:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 04 10:14:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]:   from numpy import show_config as show_numpy_config
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'influx'
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.298772633 +0000 UTC m=+0.036178133 container create ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:14:53 compute-0 systemd[1]: Started libpod-conmon-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope.
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'insights'
Dec 04 10:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.377834017 +0000 UTC m=+0.115239517 container init ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.282272536 +0000 UTC m=+0.019678056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.384048069 +0000 UTC m=+0.121453569 container start ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.387204786 +0000 UTC m=+0.124610306 container attach ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:53 compute-0 wonderful_payne[81467]: 167 167
Dec 04 10:14:53 compute-0 systemd[1]: libpod-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope: Deactivated successfully.
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.390895471 +0000 UTC m=+0.128300971 container died ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f379d8f8a275b0d075b42ed652a7ddb91d3bd6a0b35c58f1329fbbbc089723a4-merged.mount: Deactivated successfully.
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'iostat'
Dec 04 10:14:53 compute-0 podman[81432]: 2025-12-04 10:14:53.44019201 +0000 UTC m=+0.177597510 container remove ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:14:53 compute-0 systemd[1]: libpod-conmon-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope: Deactivated successfully.
Dec 04 10:14:53 compute-0 sudo[81371]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec 04 10:14:53 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec 04 10:14:53 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'k8sevents'
Dec 04 10:14:53 compute-0 sudo[81485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:53 compute-0 sudo[81485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:53 compute-0 sudo[81485]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3658591949' entity='client.admin' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:53 compute-0 sudo[81510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:14:53 compute-0 sudo[81510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 04 10:14:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 04 10:14:53 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:53 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'localpool'
Dec 04 10:14:53 compute-0 podman[81552]: 2025-12-04 10:14:53.98875748 +0000 UTC m=+0.062576527 container create eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'mds_autoscaler'
Dec 04 10:14:54 compute-0 systemd[1]: Started libpod-conmon-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope.
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:53.967982297 +0000 UTC m=+0.041801364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:54.091276957 +0000 UTC m=+0.165096034 container init eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:54.102077611 +0000 UTC m=+0.175896648 container start eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:54.106333388 +0000 UTC m=+0.180152425 container attach eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:54 compute-0 eager_heyrovsky[81569]: 167 167
Dec 04 10:14:54 compute-0 systemd[1]: libpod-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope: Deactivated successfully.
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:54.111059954 +0000 UTC m=+0.184879011 container died eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3c681765afbea1cc938e99b9ccc0c822666e576d2f0330c01c9ef3adae4a25b-merged.mount: Deactivated successfully.
Dec 04 10:14:54 compute-0 podman[81552]: 2025-12-04 10:14:54.155398102 +0000 UTC m=+0.229217129 container remove eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:14:54 compute-0 systemd[1]: libpod-conmon-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope: Deactivated successfully.
Dec 04 10:14:54 compute-0 sudo[81510]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'mirroring'
Dec 04 10:14:54 compute-0 sudo[81585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:54 compute-0 sudo[81585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:54 compute-0 sudo[81585]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'nfs'
Dec 04 10:14:54 compute-0 sudo[81610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:14:54 compute-0 sudo[81610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'orchestrator'
Dec 04 10:14:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 04 10:14:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:14:54 compute-0 ceph-mon[75358]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 04 10:14:54 compute-0 ceph-mon[75358]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 04 10:14:54 compute-0 ceph-mon[75358]: Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec 04 10:14:54 compute-0 ceph-mon[75358]: Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec 04 10:14:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 04 10:14:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 04 10:14:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 04 10:14:54 compute-0 practical_bouman[81412]: set require_min_compat_client to mimic
Dec 04 10:14:54 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 04 10:14:54 compute-0 systemd[1]: libpod-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope: Deactivated successfully.
Dec 04 10:14:54 compute-0 podman[81396]: 2025-12-04 10:14:54.63602579 +0000 UTC m=+1.558401081 container died 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2-merged.mount: Deactivated successfully.
Dec 04 10:14:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:54 compute-0 podman[81396]: 2025-12-04 10:14:54.68159568 +0000 UTC m=+1.603971001 container remove 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:54 compute-0 systemd[1]: libpod-conmon-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope: Deactivated successfully.
Dec 04 10:14:54 compute-0 sudo[81335]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'osd_perf_query'
Dec 04 10:14:54 compute-0 podman[81691]: 2025-12-04 10:14:54.84592116 +0000 UTC m=+0.080493841 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'osd_support'
Dec 04 10:14:54 compute-0 podman[81691]: 2025-12-04 10:14:54.971523152 +0000 UTC m=+0.206095773 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:54 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'pg_autoscaler'
Dec 04 10:14:55 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'progress'
Dec 04 10:14:55 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'prometheus'
Dec 04 10:14:55 compute-0 sudo[81795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjawdnzyksnjhpsocudmdugsubkejssp ; /usr/bin/python3'
Dec 04 10:14:55 compute-0 sudo[81795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:55 compute-0 sudo[81610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:55 compute-0 python3[81804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:14:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:14:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:14:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:14:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 podman[81826]: 2025-12-04 10:14:55.441890754 +0000 UTC m=+0.042154950 container create 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:14:55 compute-0 systemd[1]: Started libpod-conmon-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope.
Dec 04 10:14:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:55 compute-0 sudo[81837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:55 compute-0 podman[81826]: 2025-12-04 10:14:55.504630144 +0000 UTC m=+0.104894370 container init 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:14:55 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'rbd_support'
Dec 04 10:14:55 compute-0 sudo[81837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:55 compute-0 podman[81826]: 2025-12-04 10:14:55.514126096 +0000 UTC m=+0.114390292 container start 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:14:55 compute-0 sudo[81837]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:55 compute-0 podman[81826]: 2025-12-04 10:14:55.517540467 +0000 UTC m=+0.117804683 container attach 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:14:55 compute-0 podman[81826]: 2025-12-04 10:14:55.428005594 +0000 UTC m=+0.028269820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:55 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'rgw'
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 04 10:14:55 compute-0 ceph-mon[75358]: osdmap e3: 0 total, 0 up, 0 in
Dec 04 10:14:55 compute-0 ceph-mon[75358]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:55 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:55 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'rook'
Dec 04 10:14:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:56 compute-0 sudo[81890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:56 compute-0 sudo[81890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:56 compute-0 sudo[81890]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:56 compute-0 sudo[81915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 04 10:14:56 compute-0 sudo[81915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:56 compute-0 sudo[81915]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO root] Added host compute-0
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 eloquent_tharp[81864]: Added host 'compute-0' with addr '192.168.122.100'
Dec 04 10:14:56 compute-0 eloquent_tharp[81864]: Scheduled mon update...
Dec 04 10:14:56 compute-0 eloquent_tharp[81864]: Scheduled mgr update...
Dec 04 10:14:56 compute-0 eloquent_tharp[81864]: Scheduled osd.default_drive_group update...
Dec 04 10:14:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1))
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec 04 10:14:56 compute-0 systemd[1]: libpod-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope: Deactivated successfully.
Dec 04 10:14:56 compute-0 podman[81826]: 2025-12-04 10:14:56.453588177 +0000 UTC m=+1.053852373 container died 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034-merged.mount: Deactivated successfully.
Dec 04 10:14:56 compute-0 podman[81826]: 2025-12-04 10:14:56.496763295 +0000 UTC m=+1.097027501 container remove 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:56 compute-0 sudo[81960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:56 compute-0 sudo[81960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:56 compute-0 sudo[81960]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:56 compute-0 systemd[1]: libpod-conmon-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope: Deactivated successfully.
Dec 04 10:14:56 compute-0 sudo[81795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:56 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'selftest'
Dec 04 10:14:56 compute-0 sudo[81995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --name mgr.compute-0.tucvmw --force --tcp-ports 8765
Dec 04 10:14:56 compute-0 sudo[81995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:56 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'smb'
Dec 04 10:14:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:14:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:56 compute-0 sudo[82044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftocgjapihcjgqajvmsqbrflqqnyosjb ; /usr/bin/python3'
Dec 04 10:14:56 compute-0 sudo[82044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:14:56 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:14:56 compute-0 python3[82052]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:14:56 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'snap_schedule'
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.016208621 +0000 UTC m=+0.045485800 container create 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Dec 04 10:14:57 compute-0 ceph-mgr[80942]: mgr[py] Loading python module 'stats'
Dec 04 10:14:57 compute-0 systemd[1]: Started libpod-conmon-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope.
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:56.997907342 +0000 UTC m=+0.027184541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.119137645 +0000 UTC m=+0.148414934 container init 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:57 compute-0 podman[82103]: 2025-12-04 10:14:57.121764642 +0000 UTC m=+0.107178711 container died 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.128810509 +0000 UTC m=+0.158087728 container start 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.135942128 +0000 UTC m=+0.165219307 container attach 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc-merged.mount: Deactivated successfully.
Dec 04 10:14:57 compute-0 podman[82103]: 2025-12-04 10:14:57.187320393 +0000 UTC m=+0.172734432 container remove 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:14:57 compute-0 bash[82103]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw
Dec 04 10:14:57 compute-0 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Main process exited, code=exited, status=143/n/a
Dec 04 10:14:57 compute-0 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Failed with result 'exit-code'.
Dec 04 10:14:57 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:14:57 compute-0 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Consumed 6.822s CPU time, 394.4M memory peak, read 0B from disk, written 964.0K to disk.
Dec 04 10:14:57 compute-0 systemd[1]: Reloading.
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Added host compute-0
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Saving service mon spec with placement compute-0
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Saving service mgr spec with placement compute-0
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Saving service osd.default_drive_group spec with placement compute-0
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec 04 10:14:57 compute-0 ceph-mon[75358]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:57 compute-0 systemd-rc-local-generator[82210]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:14:57 compute-0 systemd-sysv-generator[82214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388289462' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:14:57 compute-0 objective_spence[82118]: 
Dec 04 10:14:57 compute-0 objective_spence[82118]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":51,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-04T10:14:03.534445+0000","services":{}},"progress_events":{"6872cb54-2e25-4297-bf9c-8149799b5fdd":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 04 10:14:57 compute-0 systemd[1]: libpod-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope: Deactivated successfully.
Dec 04 10:14:57 compute-0 conmon[82118]: conmon 5fb883146f40567fbf20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope/container/memory.events
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.733741835 +0000 UTC m=+0.763019024 container died 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:14:57 compute-0 sudo[81995]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.tucvmw
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.tucvmw
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} : dispatch
Dec 04 10:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba-merged.mount: Deactivated successfully.
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"}]': finished
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1))
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 04 10:14:57 compute-0 podman[82079]: 2025-12-04 10:14:57.784089342 +0000 UTC m=+0.813366541 container remove 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:57 compute-0 systemd[1]: libpod-conmon-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope: Deactivated successfully.
Dec 04 10:14:57 compute-0 sudo[82044]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:57 compute-0 sudo[82236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:14:57 compute-0 sudo[82236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:57 compute-0 sudo[82236]: pam_unix(sudo:session): session closed for user root
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 3 completed events
Dec 04 10:14:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:14:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:14:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:14:57 compute-0 sudo[82261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:14:57 compute-0 sudo[82261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.260454802 +0000 UTC m=+0.044365969 container create c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:14:58 compute-0 systemd[1]: Started libpod-conmon-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope.
Dec 04 10:14:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.240757998 +0000 UTC m=+0.024669175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.412807077 +0000 UTC m=+0.196718244 container init c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.424768923 +0000 UTC m=+0.208680090 container start c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:14:58 compute-0 suspicious_leavitt[82313]: 167 167
Dec 04 10:14:58 compute-0 systemd[1]: libpod-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope: Deactivated successfully.
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.569300848 +0000 UTC m=+0.353212065 container attach c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.570760444 +0000 UTC m=+0.354671641 container died c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1388289462' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} : dispatch
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"}]': finished
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:14:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-767fa2c31979eadff650e7dc84d7f1dd111c6fa3ad5b83ff549c39fe613e38a3-merged.mount: Deactivated successfully.
Dec 04 10:14:58 compute-0 podman[82297]: 2025-12-04 10:14:58.623069676 +0000 UTC m=+0.406980843 container remove c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:14:58 compute-0 systemd[1]: libpod-conmon-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope: Deactivated successfully.
Dec 04 10:14:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:58 compute-0 podman[82340]: 2025-12-04 10:14:58.843185521 +0000 UTC m=+0.061356274 container create e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:14:58 compute-0 systemd[1]: Started libpod-conmon-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope.
Dec 04 10:14:58 compute-0 podman[82340]: 2025-12-04 10:14:58.812270298 +0000 UTC m=+0.030441131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:14:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:14:58 compute-0 podman[82340]: 2025-12-04 10:14:58.956547629 +0000 UTC m=+0.174718462 container init e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:14:58 compute-0 podman[82340]: 2025-12-04 10:14:58.970701613 +0000 UTC m=+0.188872406 container start e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:14:58 compute-0 podman[82340]: 2025-12-04 10:14:58.976142085 +0000 UTC m=+0.194312938 container attach e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:14:59 compute-0 ceph-mon[75358]: Removing key for mgr.compute-0.tucvmw
Dec 04 10:14:59 compute-0 ceph-mon[75358]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:14:59 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:14:59 compute-0 exciting_elgamal[82356]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:14:59 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:14:59 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:14:59 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d6d34217-6607-43be-80be-ae04b730142c
Dec 04 10:15:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} v 0)
Dec 04 10:15:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} : dispatch
Dec 04 10:15:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 04 10:15:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"}]': finished
Dec 04 10:15:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 04 10:15:00 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 04 10:15:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:00 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} : dispatch
Dec 04 10:15:01 compute-0 ceph-mon[75358]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"}]': finished
Dec 04 10:15:01 compute-0 ceph-mon[75358]: osdmap e4: 1 total, 0 up, 1 in
Dec 04 10:15:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 04 10:15:01 compute-0 lvm[82451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:01 compute-0 lvm[82451]: VG ceph_vg0 finished
Dec 04 10:15:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:01 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 04 10:15:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577417938' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:01 compute-0 exciting_elgamal[82356]:  stderr: got monmap epoch 1
Dec 04 10:15:02 compute-0 exciting_elgamal[82356]: --> Creating keyring file for osd.0
Dec 04 10:15:02 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 04 10:15:02 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 04 10:15:02 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d6d34217-6607-43be-80be-ae04b730142c --setuser ceph --setgroup ceph
Dec 04 10:15:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3577417938' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:03 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 04 10:15:03 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 04 10:15:03 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:03 compute-0 sshd-session[82889]: Invalid user pzuser from 103.179.218.243 port 41162
Dec 04 10:15:04 compute-0 sshd-session[82889]: Received disconnect from 103.179.218.243 port 41162:11: Bye Bye [preauth]
Dec 04 10:15:04 compute-0 sshd-session[82889]: Disconnected from invalid user pzuser 103.179.218.243 port 41162 [preauth]
Dec 04 10:15:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:05 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:07 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:08 compute-0 ceph-mon[75358]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:02.111+0000 7f93c19bb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:02.135+0000 7f93c19bb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:09 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8cc1daa3-82be-4bdc-8e62-fc5001daf8bb
Dec 04 10:15:09 compute-0 ceph-mon[75358]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 04 10:15:09 compute-0 ceph-mon[75358]: Cluster is now healthy
Dec 04 10:15:09 compute-0 ceph-mon[75358]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:09 compute-0 ceph-mon[75358]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:09 compute-0 ceph-mon[75358]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} v 0)
Dec 04 10:15:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} : dispatch
Dec 04 10:15:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 04 10:15:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:09 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"}]': finished
Dec 04 10:15:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 04 10:15:10 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 04 10:15:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:10 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:10 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:10 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} : dispatch
Dec 04 10:15:10 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"}]': finished
Dec 04 10:15:10 compute-0 ceph-mon[75358]: osdmap e5: 2 total, 0 up, 2 in
Dec 04 10:15:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:10 compute-0 lvm[83398]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:10 compute-0 lvm[83398]: VG ceph_vg1 finished
Dec 04 10:15:10 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 04 10:15:10 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 04 10:15:10 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 04 10:15:10 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:10 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 04 10:15:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 04 10:15:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595169929' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:11 compute-0 exciting_elgamal[82356]:  stderr: got monmap epoch 1
Dec 04 10:15:11 compute-0 exciting_elgamal[82356]: --> Creating keyring file for osd.1
Dec 04 10:15:11 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 04 10:15:11 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 04 10:15:11 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8cc1daa3-82be-4bdc-8e62-fc5001daf8bb --setuser ceph --setgroup ceph
Dec 04 10:15:11 compute-0 ceph-mon[75358]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1595169929' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:11 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:12 compute-0 ceph-mon[75358]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:11.155+0000 7f7a3e26c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:11.172+0000 7f7a3e26c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:13 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ee6d319-dca2-4c06-9365-2240b94f11cb
Dec 04 10:15:13 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} v 0)
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"}]': finished
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:14 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:14 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:14 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"}]': finished
Dec 04 10:15:14 compute-0 ceph-mon[75358]: osdmap e6: 3 total, 0 up, 3 in
Dec 04 10:15:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:14 compute-0 lvm[84346]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:14 compute-0 lvm[84346]: VG ceph_vg2 finished
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 04 10:15:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 04 10:15:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450039958' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:14 compute-0 exciting_elgamal[82356]:  stderr: got monmap epoch 1
Dec 04 10:15:15 compute-0 exciting_elgamal[82356]: --> Creating keyring file for osd.2
Dec 04 10:15:15 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 04 10:15:15 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 04 10:15:15 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2ee6d319-dca2-4c06-9365-2240b94f11cb --setuser ceph --setgroup ceph
Dec 04 10:15:15 compute-0 ceph-mon[75358]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:15 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/450039958' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 04 10:15:15 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:15.158+0000 7f52df8678c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]:  stderr: 2025-12-04T10:15:15.182+0000 7f52df8678c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 04 10:15:16 compute-0 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 04 10:15:16 compute-0 sshd-session[74477]: Received disconnect from 101.47.163.20 port 47342:11: Bye Bye [preauth]
Dec 04 10:15:16 compute-0 sshd-session[74477]: Disconnected from invalid user teste 101.47.163.20 port 47342 [preauth]
Dec 04 10:15:16 compute-0 systemd[1]: libpod-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Deactivated successfully.
Dec 04 10:15:16 compute-0 systemd[1]: libpod-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Consumed 6.915s CPU time.
Dec 04 10:15:16 compute-0 podman[82340]: 2025-12-04 10:15:16.335834877 +0000 UTC m=+17.554005680 container died e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5-merged.mount: Deactivated successfully.
Dec 04 10:15:16 compute-0 podman[82340]: 2025-12-04 10:15:16.399284155 +0000 UTC m=+17.617454928 container remove e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:15:16 compute-0 systemd[1]: libpod-conmon-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Deactivated successfully.
Dec 04 10:15:16 compute-0 sudo[82261]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:16 compute-0 sudo[85278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:16 compute-0 sudo[85278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:16 compute-0 sudo[85278]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:16 compute-0 sudo[85303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:15:16 compute-0 sudo[85303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.873369848 +0000 UTC m=+0.042444247 container create a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:15:16 compute-0 systemd[1]: Started libpod-conmon-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope.
Dec 04 10:15:16 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.855710617 +0000 UTC m=+0.024785066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.952940181 +0000 UTC m=+0.122014600 container init a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.961007318 +0000 UTC m=+0.130081727 container start a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.964031601 +0000 UTC m=+0.133106030 container attach a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:15:16 compute-0 optimistic_wilson[85355]: 167 167
Dec 04 10:15:16 compute-0 systemd[1]: libpod-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope: Deactivated successfully.
Dec 04 10:15:16 compute-0 podman[85339]: 2025-12-04 10:15:16.967085746 +0000 UTC m=+0.136160155 container died a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-91d3a6be2c62225d64dd6b079831ffbf84120df8cb6174a149fdf0a4c5c67fef-merged.mount: Deactivated successfully.
Dec 04 10:15:17 compute-0 podman[85339]: 2025-12-04 10:15:17.002342756 +0000 UTC m=+0.171417195 container remove a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:17 compute-0 systemd[1]: libpod-conmon-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope: Deactivated successfully.
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.220695806 +0000 UTC m=+0.066901814 container create e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:17 compute-0 systemd[1]: Started libpod-conmon-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope.
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.192895257 +0000 UTC m=+0.039101305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.327312339 +0000 UTC m=+0.173518387 container init e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.334960146 +0000 UTC m=+0.181166144 container start e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.338388379 +0000 UTC m=+0.184594377 container attach e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]: {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     "0": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "devices": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "/dev/loop3"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             ],
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_name": "ceph_lv0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_size": "21470642176",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "name": "ceph_lv0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "tags": {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.crush_device_class": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.encrypted": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_id": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.vdo": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.with_tpm": "0"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             },
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "vg_name": "ceph_vg0"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         }
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     ],
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     "1": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "devices": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "/dev/loop4"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             ],
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_name": "ceph_lv1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_size": "21470642176",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "name": "ceph_lv1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "tags": {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.crush_device_class": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.encrypted": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_id": "1",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.vdo": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.with_tpm": "0"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             },
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "vg_name": "ceph_vg1"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         }
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     ],
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     "2": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "devices": [
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "/dev/loop5"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             ],
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_name": "ceph_lv2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_size": "21470642176",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "name": "ceph_lv2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "tags": {
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.crush_device_class": "",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.encrypted": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osd_id": "2",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.vdo": "0",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:                 "ceph.with_tpm": "0"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             },
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "type": "block",
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:             "vg_name": "ceph_vg2"
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:         }
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]:     ]
Dec 04 10:15:17 compute-0 nice_ishizaka[85395]: }
Dec 04 10:15:17 compute-0 systemd[1]: libpod-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope: Deactivated successfully.
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.648025707 +0000 UTC m=+0.494231675 container died e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f-merged.mount: Deactivated successfully.
Dec 04 10:15:17 compute-0 podman[85379]: 2025-12-04 10:15:17.695768212 +0000 UTC m=+0.541974180 container remove e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:15:17 compute-0 systemd[1]: libpod-conmon-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope: Deactivated successfully.
Dec 04 10:15:17 compute-0 ceph-mon[75358]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:17 compute-0 sudo[85303]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 04 10:15:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 04 10:15:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:17 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 04 10:15:17 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 04 10:15:17 compute-0 sudo[85419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:17 compute-0 sudo[85419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:17 compute-0 sudo[85419]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:17 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:17 compute-0 sudo[85444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:15:17 compute-0 sudo[85444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.314926917 +0000 UTC m=+0.045262597 container create 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:18 compute-0 systemd[1]: Started libpod-conmon-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope.
Dec 04 10:15:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.29664199 +0000 UTC m=+0.026977690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.404990395 +0000 UTC m=+0.135326095 container init 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.412897358 +0000 UTC m=+0.143233038 container start 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:15:18 compute-0 thirsty_williams[85526]: 167 167
Dec 04 10:15:18 compute-0 systemd[1]: libpod-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope: Deactivated successfully.
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.418224438 +0000 UTC m=+0.148560138 container attach 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.419364795 +0000 UTC m=+0.149700495 container died 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7832cf96527604160e077b24b0f83679ad13f3fd63ea300004bd604ea38b7d95-merged.mount: Deactivated successfully.
Dec 04 10:15:18 compute-0 podman[85510]: 2025-12-04 10:15:18.460292034 +0000 UTC m=+0.190627714 container remove 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:18 compute-0 systemd[1]: libpod-conmon-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope: Deactivated successfully.
Dec 04 10:15:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 04 10:15:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:18 compute-0 podman[85556]: 2025-12-04 10:15:18.804831454 +0000 UTC m=+0.027440470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:18 compute-0 podman[85556]: 2025-12-04 10:15:18.956081366 +0000 UTC m=+0.178690292 container create 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:15:19 compute-0 systemd[1]: Started libpod-conmon-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope.
Dec 04 10:15:19 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:19 compute-0 podman[85556]: 2025-12-04 10:15:19.078087535 +0000 UTC m=+0.300696491 container init 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:15:19 compute-0 podman[85556]: 2025-12-04 10:15:19.084936433 +0000 UTC m=+0.307545389 container start 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:15:19 compute-0 podman[85556]: 2025-12-04 10:15:19.08894711 +0000 UTC m=+0.311556056 container attach 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:19 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 04 10:15:19 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]:                             [--no-systemd] [--no-tmpfs]
Dec 04 10:15:19 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 04 10:15:19 compute-0 systemd[1]: libpod-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope: Deactivated successfully.
Dec 04 10:15:19 compute-0 podman[85556]: 2025-12-04 10:15:19.28762576 +0000 UTC m=+0.510234726 container died 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2-merged.mount: Deactivated successfully.
Dec 04 10:15:19 compute-0 podman[85556]: 2025-12-04 10:15:19.378204231 +0000 UTC m=+0.600813167 container remove 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:19 compute-0 systemd[1]: libpod-conmon-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope: Deactivated successfully.
Dec 04 10:15:19 compute-0 ceph-mon[75358]: Deploying daemon osd.0 on compute-0
Dec 04 10:15:19 compute-0 ceph-mon[75358]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:19 compute-0 systemd[1]: Reloading.
Dec 04 10:15:19 compute-0 systemd-sysv-generator[85640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:19 compute-0 systemd-rc-local-generator[85636]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:19 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:20 compute-0 systemd[1]: Reloading.
Dec 04 10:15:20 compute-0 systemd-rc-local-generator[85676]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:20 compute-0 systemd-sysv-generator[85681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:20 compute-0 systemd[1]: Starting Ceph osd.0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:15:20 compute-0 podman[85733]: 2025-12-04 10:15:20.586504405 +0000 UTC m=+0.057093345 container create 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:15:20 compute-0 podman[85733]: 2025-12-04 10:15:20.566574949 +0000 UTC m=+0.037163929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:20 compute-0 podman[85733]: 2025-12-04 10:15:20.735089152 +0000 UTC m=+0.205678112 container init 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:15:20 compute-0 podman[85733]: 2025-12-04 10:15:20.740153206 +0000 UTC m=+0.210742146 container start 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:15:20 compute-0 podman[85733]: 2025-12-04 10:15:20.781743291 +0000 UTC m=+0.252332261 container attach 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:15:20 compute-0 ceph-mon[75358]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:20 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:20 compute-0 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:20 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:20 compute-0 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:21 compute-0 lvm[85835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:21 compute-0 lvm[85835]: VG ceph_vg0 finished
Dec 04 10:15:21 compute-0 lvm[85836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:21 compute-0 lvm[85836]: VG ceph_vg1 finished
Dec 04 10:15:21 compute-0 lvm[85838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:21 compute-0 lvm[85838]: VG ceph_vg2 finished
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:21 compute-0 bash[85733]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 04 10:15:21 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:21 compute-0 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 04 10:15:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 04 10:15:21 compute-0 bash[85733]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 04 10:15:21 compute-0 systemd[1]: libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope: Deactivated successfully.
Dec 04 10:15:21 compute-0 systemd[1]: libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope: Consumed 1.708s CPU time.
Dec 04 10:15:21 compute-0 conmon[85749]: conmon 70a123ec28e72812a4f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope/container/memory.events
Dec 04 10:15:21 compute-0 podman[85733]: 2025-12-04 10:15:21.950705455 +0000 UTC m=+1.421294445 container died 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110-merged.mount: Deactivated successfully.
Dec 04 10:15:22 compute-0 podman[85733]: 2025-12-04 10:15:22.014039642 +0000 UTC m=+1.484628582 container remove 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:22 compute-0 podman[86000]: 2025-12-04 10:15:22.26142786 +0000 UTC m=+0.047496090 container create f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec 04 10:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:22 compute-0 podman[86000]: 2025-12-04 10:15:22.236650625 +0000 UTC m=+0.022718895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:22 compute-0 podman[86000]: 2025-12-04 10:15:22.344758265 +0000 UTC m=+0.130826515 container init f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:15:22 compute-0 podman[86000]: 2025-12-04 10:15:22.361876142 +0000 UTC m=+0.147944402 container start f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:22 compute-0 bash[86000]: f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733
Dec 04 10:15:22 compute-0 systemd[1]: Started Ceph osd.0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:15:22 compute-0 ceph-osd[86021]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: pidfile_write: ignore empty --pid-file
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 sudo[85444]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 04 10:15:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:22 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:22 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 04 10:15:22 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 sudo[86035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 sudo[86035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:22 compute-0 sudo[86035]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 sudo[86066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:15:22 compute-0 sudo[86066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 04 10:15:22 compute-0 ceph-osd[86021]: load: jerasure load: lrc 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount shared_bdev_used = 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Git sha 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DB SUMMARY
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DB Session ID:  PFQFCW5ZC5JN7BO8U6AB
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                     Options.env: 0x56116139fea0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                Options.info_log: 0x5611623f08a0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.write_buffer_manager: 0x561161404b40
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Compression algorithms supported:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef8b57b4-a295-48e7-9e30-b2d54314d54d
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322772587, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322774798, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: freelist init
Dec 04 10:15:22 compute-0 ceph-osd[86021]: freelist _read_cfg
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs umount
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) close
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluefs mount shared_bdev_used = 27262976
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Git sha 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DB SUMMARY
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DB Session ID:  PFQFCW5ZC5JN7BO8U6AA
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                     Options.env: 0x5611625c0a10
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                Options.info_log: 0x5611623f0a20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.write_buffer_manager: 0x561161405900
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Compression algorithms supported:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5611613a3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef8b57b4-a295-48e7-9e30-b2d54314d54d
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322828900, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322836492, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322839888, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322842858, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322844908, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56116260a000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: DB pointer 0x5611625aa000
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 04 10:15:22 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:15:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:15:22 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 04 10:15:22 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 04 10:15:22 compute-0 ceph-osd[86021]: _get_class not permitted to load lua
Dec 04 10:15:22 compute-0 ceph-osd[86021]: _get_class not permitted to load sdk
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 load_pgs
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 load_pgs opened 0 pgs
Dec 04 10:15:22 compute-0 ceph-osd[86021]: osd.0 0 log_to_monitors true
Dec 04 10:15:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:22.885+0000 7f8fb959a8c0 -1 osd.0 0 log_to_monitors true
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 04 10:15:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 04 10:15:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.004669113 +0000 UTC m=+0.043847511 container create be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:15:23 compute-0 systemd[1]: Started libpod-conmon-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope.
Dec 04 10:15:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:22.985445613 +0000 UTC m=+0.024624021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.081437227 +0000 UTC m=+0.120615625 container init be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.089163865 +0000 UTC m=+0.128342243 container start be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.092051055 +0000 UTC m=+0.131229433 container attach be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:15:23 compute-0 nervous_brown[86573]: 167 167
Dec 04 10:15:23 compute-0 systemd[1]: libpod-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope: Deactivated successfully.
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.097094119 +0000 UTC m=+0.136272537 container died be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-27fa8b5f9710f91c0cd172c50191d1413db68e541b0dfa2865aa2a469c9f2b05-merged.mount: Deactivated successfully.
Dec 04 10:15:23 compute-0 podman[86557]: 2025-12-04 10:15:23.139389971 +0000 UTC m=+0.178568349 container remove be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:23 compute-0 systemd[1]: libpod-conmon-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope: Deactivated successfully.
Dec 04 10:15:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 04 10:15:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:23 compute-0 ceph-mon[75358]: Deploying daemon osd.1 on compute-0
Dec 04 10:15:23 compute-0 ceph-mon[75358]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:23 compute-0 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.442805348 +0000 UTC m=+0.058902769 container create 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:23 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:23 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:23 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:23 compute-0 systemd[1]: Started libpod-conmon-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope.
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.424919581 +0000 UTC m=+0.041017032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.572787351 +0000 UTC m=+0.188884862 container init 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.584549047 +0000 UTC m=+0.200646508 container start 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.588536525 +0000 UTC m=+0.204633966 container attach 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec 04 10:15:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 04 10:15:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]:                             [--no-systemd] [--no-tmpfs]
Dec 04 10:15:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 04 10:15:23 compute-0 systemd[1]: libpod-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope: Deactivated successfully.
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.796004129 +0000 UTC m=+0.412101560 container died 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320-merged.mount: Deactivated successfully.
Dec 04 10:15:23 compute-0 podman[86602]: 2025-12-04 10:15:23.846938983 +0000 UTC m=+0.463036414 container remove 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:15:23 compute-0 systemd[1]: libpod-conmon-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope: Deactivated successfully.
Dec 04 10:15:23 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:23 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 04 10:15:23 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 04 10:15:24 compute-0 systemd[1]: Reloading.
Dec 04 10:15:24 compute-0 systemd-rc-local-generator[86681]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:24 compute-0 systemd-sysv-generator[86684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 done with init, starting boot process
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 start_boot
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 04 10:15:24 compute-0 ceph-osd[86021]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:24 compute-0 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 04 10:15:24 compute-0 ceph-mon[75358]: osdmap e7: 3 total, 0 up, 3 in
Dec 04 10:15:24 compute-0 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:24 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:24 compute-0 systemd[1]: Reloading.
Dec 04 10:15:24 compute-0 systemd-rc-local-generator[86723]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:24 compute-0 systemd-sysv-generator[86727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:24 compute-0 systemd[1]: Starting Ceph osd.1 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:15:25 compute-0 podman[86778]: 2025-12-04 10:15:25.170370798 +0000 UTC m=+0.072564543 container create 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:25 compute-0 podman[86778]: 2025-12-04 10:15:25.138719415 +0000 UTC m=+0.040913170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:25 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:25 compute-0 podman[86778]: 2025-12-04 10:15:25.289899335 +0000 UTC m=+0.192093140 container init 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:25 compute-0 podman[86778]: 2025-12-04 10:15:25.297642604 +0000 UTC m=+0.199836369 container start 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:15:25 compute-0 podman[86778]: 2025-12-04 10:15:25.310191371 +0000 UTC m=+0.212385136 container attach 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:15:25 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:25 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:25 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:25 compute-0 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:25 compute-0 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:25 compute-0 ceph-mon[75358]: osdmap e8: 3 total, 0 up, 3 in
Dec 04 10:15:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:25 compute-0 ceph-mon[75358]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:25 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:25 compute-0 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:25 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:26 compute-0 lvm[86876]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:26 compute-0 lvm[86879]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:26 compute-0 lvm[86879]: VG ceph_vg1 finished
Dec 04 10:15:26 compute-0 lvm[86876]: VG ceph_vg0 finished
Dec 04 10:15:26 compute-0 lvm[86881]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:26 compute-0 lvm[86881]: VG ceph_vg2 finished
Dec 04 10:15:26 compute-0 lvm[86882]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:26 compute-0 lvm[86882]: VG ceph_vg1 finished
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:26 compute-0 bash[86778]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:26 compute-0 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 04 10:15:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 04 10:15:26 compute-0 bash[86778]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 04 10:15:26 compute-0 ceph-mon[75358]: purged_snaps scrub starts
Dec 04 10:15:26 compute-0 ceph-mon[75358]: purged_snaps scrub ok
Dec 04 10:15:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:26 compute-0 systemd[1]: libpod-1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e.scope: Deactivated successfully.
Dec 04 10:15:26 compute-0 systemd[1]: libpod-1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e.scope: Consumed 1.837s CPU time.
Dec 04 10:15:26 compute-0 podman[86995]: 2025-12-04 10:15:26.654862794 +0000 UTC m=+0.030989708 container died 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:15:26
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:15:26 compute-0 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec 04 10:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662-merged.mount: Deactivated successfully.
Dec 04 10:15:27 compute-0 podman[86995]: 2025-12-04 10:15:27.1078248 +0000 UTC m=+0.483951704 container remove 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:27 compute-0 podman[87051]: 2025-12-04 10:15:27.471412765 +0000 UTC m=+0.112362483 container create f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:15:27 compute-0 podman[87051]: 2025-12-04 10:15:27.384612906 +0000 UTC m=+0.025562674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:27 compute-0 podman[87051]: 2025-12-04 10:15:27.581030082 +0000 UTC m=+0.221979820 container init f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:15:27 compute-0 podman[87051]: 2025-12-04 10:15:27.589722634 +0000 UTC m=+0.230672312 container start f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:27 compute-0 bash[87051]: f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5
Dec 04 10:15:27 compute-0 ceph-mon[75358]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:27 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:27 compute-0 ceph-osd[87071]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:15:27 compute-0 ceph-osd[87071]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 04 10:15:27 compute-0 ceph-osd[87071]: pidfile_write: ignore empty --pid-file
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 systemd[1]: Started Ceph osd.1 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 sudo[86066]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:15:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:15:27 compute-0 ceph-osd[87071]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 04 10:15:27 compute-0 ceph-osd[87071]: load: jerasure load: lrc 
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:27 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 sudo[87123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gohggrfznhdrfzaytspjcfrhrvaegdzp ; /usr/bin/python3'
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 sudo[87123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 04 10:15:28 compute-0 ceph-osd[87071]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount shared_bdev_used = 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Git sha 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DB SUMMARY
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DB Session ID:  BRSQNPZ8VAPD8X1H06XT
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                     Options.env: 0x559004ea3ea0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x559005f2a8a0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x559004f04b40
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328150787, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328153808, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 04 10:15:28 compute-0 python3[87130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: freelist init
Dec 04 10:15:28 compute-0 ceph-osd[87071]: freelist _read_cfg
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs umount
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) close
Dec 04 10:15:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluefs mount shared_bdev_used = 27262976
Dec 04 10:15:28 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Git sha 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DB SUMMARY
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DB Session ID:  BRSQNPZ8VAPD8X1H06XS
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                     Options.env: 0x559005cefdc0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x559005f2b340
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x559004f05900
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559004ea74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328202732, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 04 10:15:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:28 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 04 10:15:28 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 04 10:15:28 compute-0 podman[87333]: 2025-12-04 10:15:28.224488898 +0000 UTC m=+0.031473789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:15:28 compute-0 sudo[87529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:28 compute-0 sudo[87529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:28 compute-0 sudo[87529]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:28 compute-0 podman[87333]: 2025-12-04 10:15:28.382316091 +0000 UTC m=+0.189300962 container create 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328384347, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328391359, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328438476, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:28 compute-0 sudo[87554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:15:28 compute-0 sudo[87554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:28 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:28 compute-0 systemd[1]: Started libpod-conmon-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope.
Dec 04 10:15:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:28 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328640202, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:28 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 04 10:15:28 compute-0 podman[87333]: 2025-12-04 10:15:28.649569434 +0000 UTC m=+0.456554335 container init 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:28 compute-0 podman[87333]: 2025-12-04 10:15:28.658449321 +0000 UTC m=+0.465434192 container start 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:28 compute-0 podman[87333]: 2025-12-04 10:15:28.683199485 +0000 UTC m=+0.490184386 container attach 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:28 compute-0 podman[87643]: 2025-12-04 10:15:28.879152068 +0000 UTC m=+0.022911400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:28 compute-0 podman[87643]: 2025-12-04 10:15:28.981276081 +0000 UTC m=+0.125035393 container create 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:15:29 compute-0 systemd[1]: Started libpod-conmon-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope.
Dec 04 10:15:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 04 10:15:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:29 compute-0 ceph-mon[75358]: Deploying daemon osd.2 on compute-0
Dec 04 10:15:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:29 compute-0 ceph-mon[75358]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:29 compute-0 podman[87643]: 2025-12-04 10:15:29.179617292 +0000 UTC m=+0.323376634 container init 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55900610fc00
Dec 04 10:15:29 compute-0 ceph-osd[87071]: rocksdb: DB pointer 0x5590060e4000
Dec 04 10:15:29 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:29 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 04 10:15:29 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 04 10:15:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:15:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:15:29 compute-0 podman[87643]: 2025-12-04 10:15:29.187814932 +0000 UTC m=+0.331574244 container start 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:15:29 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 04 10:15:29 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 04 10:15:29 compute-0 ceph-osd[87071]: _get_class not permitted to load lua
Dec 04 10:15:29 compute-0 ceph-osd[87071]: _get_class not permitted to load sdk
Dec 04 10:15:29 compute-0 fervent_greider[87659]: 167 167
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 04 10:15:29 compute-0 conmon[87659]: conmon 29217b6c5e0dee3a62de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope/container/memory.events
Dec 04 10:15:29 compute-0 systemd[1]: libpod-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope: Deactivated successfully.
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 load_pgs
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 load_pgs opened 0 pgs
Dec 04 10:15:29 compute-0 ceph-osd[87071]: osd.1 0 log_to_monitors true
Dec 04 10:15:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:29.193+0000 7f1f9b3d38c0 -1 osd.1 0 log_to_monitors true
Dec 04 10:15:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 04 10:15:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 04 10:15:29 compute-0 podman[87643]: 2025-12-04 10:15:29.211619933 +0000 UTC m=+0.355379245 container attach 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:15:29 compute-0 podman[87643]: 2025-12-04 10:15:29.212053414 +0000 UTC m=+0.355812726 container died 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 04 10:15:29 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245618866' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:15:29 compute-0 nostalgic_jang[87581]: 
Dec 04 10:15:29 compute-0 nostalgic_jang[87581]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-04T10:14:03.534445+0000","services":{}},"progress_events":{}}
Dec 04 10:15:29 compute-0 systemd[1]: libpod-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope: Deactivated successfully.
Dec 04 10:15:29 compute-0 conmon[87581]: conmon 52c2f453236930da8f45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope/container/memory.events
Dec 04 10:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc03c0988913cfb1643bbe2e15cfe1b6a71db12843130242cf8351aa4acb3f69-merged.mount: Deactivated successfully.
Dec 04 10:15:29 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:29 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:29 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:29 compute-0 podman[87643]: 2025-12-04 10:15:29.632068396 +0000 UTC m=+0.775827708 container remove 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:29 compute-0 podman[87333]: 2025-12-04 10:15:29.66868619 +0000 UTC m=+1.475671071 container died 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:29 compute-0 systemd[1]: libpod-conmon-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope: Deactivated successfully.
Dec 04 10:15:29 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1-merged.mount: Deactivated successfully.
Dec 04 10:15:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 04 10:15:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:30 compute-0 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 04 10:15:30 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3245618866' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:15:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:30 compute-0 podman[87735]: 2025-12-04 10:15:30.232333848 +0000 UTC m=+0.142360886 container create 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:15:30 compute-0 podman[87735]: 2025-12-04 10:15:30.166406439 +0000 UTC m=+0.076433487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec 04 10:15:30 compute-0 systemd[1]: Started libpod-conmon-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope.
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:30 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:30 compute-0 podman[87735]: 2025-12-04 10:15:30.983816011 +0000 UTC m=+0.893843129 container init 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:15:30 compute-0 podman[87735]: 2025-12-04 10:15:30.992703818 +0000 UTC m=+0.902730836 container start 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:31 compute-0 podman[87735]: 2025-12-04 10:15:31.044353469 +0000 UTC m=+0.954380517 container attach 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:15:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 04 10:15:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]:                             [--no-systemd] [--no-tmpfs]
Dec 04 10:15:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 04 10:15:31 compute-0 systemd[1]: libpod-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope: Deactivated successfully.
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:31 compute-0 ceph-mon[75358]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 04 10:15:31 compute-0 ceph-mon[75358]: osdmap e9: 3 total, 0 up, 3 in
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:31 compute-0 podman[87333]: 2025-12-04 10:15:31.282599954 +0000 UTC m=+3.089584825 container remove 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:31 compute-0 podman[87735]: 2025-12-04 10:15:31.283823484 +0000 UTC m=+1.193850512 container died 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:15:31 compute-0 systemd[1]: libpod-conmon-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope: Deactivated successfully.
Dec 04 10:15:31 compute-0 sudo[87123]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:31 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:31 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446-merged.mount: Deactivated successfully.
Dec 04 10:15:31 compute-0 podman[87735]: 2025-12-04 10:15:31.726779366 +0000 UTC m=+1.636806404 container remove 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:31 compute-0 systemd[1]: libpod-conmon-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope: Deactivated successfully.
Dec 04 10:15:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 04 10:15:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:31 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 done with init, starting boot process
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 start_boot
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 04 10:15:32 compute-0 ceph-osd[87071]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 04 10:15:32 compute-0 sshd-session[87770]: Received disconnect from 74.249.218.27 port 54226:11: Bye Bye [preauth]
Dec 04 10:15:32 compute-0 sshd-session[87770]: Disconnected from authenticating user root 74.249.218.27 port 54226 [preauth]
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Dec 04 10:15:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:33 compute-0 ceph-mon[75358]: purged_snaps scrub starts
Dec 04 10:15:33 compute-0 ceph-mon[75358]: purged_snaps scrub ok
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:33 compute-0 ceph-mon[75358]: osdmap e10: 3 total, 0 up, 3 in
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:33 compute-0 systemd[1]: Reloading.
Dec 04 10:15:33 compute-0 systemd-rc-local-generator[87818]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:33 compute-0 systemd-sysv-generator[87821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:33 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:34 compute-0 systemd[1]: Reloading.
Dec 04 10:15:34 compute-0 systemd-rc-local-generator[87859]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:15:34 compute-0 systemd-sysv-generator[87863]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:15:34 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:34 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:34 compute-0 ceph-mon[75358]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:34 compute-0 systemd[1]: Starting Ceph osd.2 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:15:34 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:34 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:34 compute-0 podman[87915]: 2025-12-04 10:15:34.667972571 +0000 UTC m=+0.031542040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:34 compute-0 podman[87915]: 2025-12-04 10:15:34.918507307 +0000 UTC m=+0.282076716 container create fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:15:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:35 compute-0 podman[87915]: 2025-12-04 10:15:35.027531428 +0000 UTC m=+0.391100927 container init fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:15:35 compute-0 podman[87915]: 2025-12-04 10:15:35.036311102 +0000 UTC m=+0.399880511 container start fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:15:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:35 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:35 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:35 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:35 compute-0 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:35 compute-0 podman[87915]: 2025-12-04 10:15:35.336918781 +0000 UTC m=+0.700488240 container attach fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:15:35 compute-0 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:35 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:35 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:35 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:35 compute-0 ceph-mon[75358]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:35 compute-0 lvm[88017]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:35 compute-0 lvm[88016]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:35 compute-0 lvm[88017]: VG ceph_vg1 finished
Dec 04 10:15:35 compute-0 lvm[88016]: VG ceph_vg0 finished
Dec 04 10:15:35 compute-0 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 04 10:15:35 compute-0 lvm[88019]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:35 compute-0 lvm[88019]: VG ceph_vg2 finished
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:36 compute-0 bash[87915]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:36 compute-0 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 04 10:15:36 compute-0 bash[87915]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 04 10:15:36 compute-0 systemd[1]: libpod-fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf.scope: Deactivated successfully.
Dec 04 10:15:36 compute-0 podman[87915]: 2025-12-04 10:15:36.274757963 +0000 UTC m=+1.638327372 container died fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:15:36 compute-0 systemd[1]: libpod-fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf.scope: Consumed 1.772s CPU time.
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 4.092 iops: 1047.652 elapsed_sec: 2.864
Dec 04 10:15:36 compute-0 ceph-osd[86021]: log_channel(cluster) log [WRN] : OSD bench result of 1047.651829 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 0 waiting for initial osdmap
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:36.312+0000 7f8fb551c640 -1 osd.0 0 waiting for initial osdmap
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:36.492+0000 7f8fb0321640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 set_numa_affinity not setting numa affinity
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 04 10:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab-merged.mount: Deactivated successfully.
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:36 compute-0 podman[87915]: 2025-12-04 10:15:36.731195825 +0000 UTC m=+2.094765234 container remove fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250] boot
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:36 compute-0 ceph-osd[86021]: osd.0 11 state: booting -> active
Dec 04 10:15:37 compute-0 podman[88185]: 2025-12-04 10:15:37.002929178 +0000 UTC m=+0.059011821 container create 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:37 compute-0 podman[88185]: 2025-12-04 10:15:36.972606357 +0000 UTC m=+0.028689030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:37 compute-0 podman[88185]: 2025-12-04 10:15:37.15339742 +0000 UTC m=+0.209480073 container init 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:15:37 compute-0 podman[88185]: 2025-12-04 10:15:37.164482771 +0000 UTC m=+0.220565404 container start 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:15:37 compute-0 bash[88185]: 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42
Dec 04 10:15:37 compute-0 systemd[1]: Started Ceph osd.2 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:15:37 compute-0 ceph-osd[88205]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: pidfile_write: ignore empty --pid-file
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:37 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 sudo[87554]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 04 10:15:37 compute-0 ceph-osd[88205]: load: jerasure load: lrc 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 sudo[88225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:37 compute-0 sudo[88225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:37 compute-0 sudo[88225]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 sudo[88262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 sudo[88262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount shared_bdev_used = 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Git sha 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DB SUMMARY
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DB Session ID:  7MY0ZPEWWGRZELY8V8L4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                     Options.env: 0x55c0a1bdbea0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                Options.info_log: 0x55c0a2c488a0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.write_buffer_manager: 0x55c0a1c40b40
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Compression algorithms supported:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7bb5ab31-9ba6-46d6-87fa-5957b282c9d1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337609414, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337611523, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: freelist init
Dec 04 10:15:37 compute-0 ceph-osd[88205]: freelist _read_cfg
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs umount
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) close
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluefs mount shared_bdev_used = 27262976
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: RocksDB version: 7.9.2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Git sha 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DB SUMMARY
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DB Session ID:  7MY0ZPEWWGRZELY8V8L5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: CURRENT file:  CURRENT
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: IDENTITY file:  IDENTITY
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.error_if_exists: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.create_if_missing: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.paranoid_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                     Options.env: 0x55c0a1bdbd50
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                Options.info_log: 0x55c0a2c49b00
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_file_opening_threads: 16
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.statistics: (nil)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.use_fsync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.max_log_file_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.allow_fallocate: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.use_direct_reads: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.create_missing_column_families: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.db_log_dir: 
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                                 Options.wal_dir: db.wal
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.advise_random_on_open: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.write_buffer_manager: 0x55c0a1c41900
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                            Options.rate_limiter: (nil)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.unordered_write: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.row_cache: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                              Options.wal_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.allow_ingest_behind: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.two_write_queues: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.manual_wal_flush: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.wal_compression: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.atomic_flush: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.log_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.allow_data_in_errors: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.db_host_id: __hostname__
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_background_jobs: 4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_background_compactions: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_subcompactions: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.max_open_files: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.max_background_flushes: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Compression algorithms supported:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZSTD supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kXpressCompression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kBZip2Compression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kLZ4Compression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kZlibCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kLZ4HCCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         kSnappyCompression supported: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0a1bdf4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7bb5ab31-9ba6-46d6-87fa-5957b282c9d1
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337657731, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337662187, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337665482, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337694386, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337696754, "job": 1, "event": "recovery_finished"}
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c0a2c4bc00
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: DB pointer 0x55c0a2e02000
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 04 10:15:37 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 04 10:15:37 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 04 10:15:37 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 04 10:15:37 compute-0 ceph-osd[88205]: _get_class not permitted to load lua
Dec 04 10:15:37 compute-0 ceph-osd[88205]: _get_class not permitted to load sdk
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 load_pgs
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 load_pgs opened 0 pgs
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:15:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:15:37 compute-0 ceph-osd[88205]: osd.2 0 log_to_monitors true
Dec 04 10:15:37 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:37.785+0000 7fd8a9ade8c0 -1 osd.2 0 log_to_monitors true
Dec 04 10:15:37 compute-0 ceph-mon[75358]: OSD bench result of 1047.651829 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:37 compute-0 ceph-mon[75358]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 04 10:15:37 compute-0 ceph-mon[75358]: osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250] boot
Dec 04 10:15:37 compute-0 ceph-mon[75358]: osdmap e11: 3 total, 1 up, 3 in
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 04 10:15:37 compute-0 podman[88721]: 2025-12-04 10:15:37.878323476 +0000 UTC m=+0.061388130 container create a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:37 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:37 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:37 compute-0 ceph-mgr[75651]: [devicehealth INFO root] creating mgr pool
Dec 04 10:15:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 04 10:15:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 04 10:15:37 compute-0 podman[88721]: 2025-12-04 10:15:37.845822763 +0000 UTC m=+0.028887407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:37 compute-0 systemd[1]: Started libpod-conmon-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope.
Dec 04 10:15:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:38 compute-0 podman[88721]: 2025-12-04 10:15:38.027957389 +0000 UTC m=+0.211022023 container init a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:38 compute-0 podman[88721]: 2025-12-04 10:15:38.038962458 +0000 UTC m=+0.222027072 container start a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:15:38 compute-0 eager_galois[88738]: 167 167
Dec 04 10:15:38 compute-0 systemd[1]: libpod-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope: Deactivated successfully.
Dec 04 10:15:38 compute-0 conmon[88738]: conmon a058100114bf0df45397 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope/container/memory.events
Dec 04 10:15:38 compute-0 podman[88721]: 2025-12-04 10:15:38.059902609 +0000 UTC m=+0.242967243 container attach a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:15:38 compute-0 podman[88721]: 2025-12-04 10:15:38.060574645 +0000 UTC m=+0.243639259 container died a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5416eeab3661cc2661757b52d28c742df388d9d567265da76da7f206f61704f5-merged.mount: Deactivated successfully.
Dec 04 10:15:38 compute-0 podman[88721]: 2025-12-04 10:15:38.188503818 +0000 UTC m=+0.371568452 container remove a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:15:38 compute-0 systemd[1]: libpod-conmon-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope: Deactivated successfully.
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:38 compute-0 podman[88764]: 2025-12-04 10:15:38.407711848 +0000 UTC m=+0.073263489 container create 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:38 compute-0 podman[88764]: 2025-12-04 10:15:38.361880349 +0000 UTC m=+0.027432080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:38 compute-0 systemd[1]: Started libpod-conmon-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope.
Dec 04 10:15:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:38 compute-0 podman[88764]: 2025-12-04 10:15:38.549684704 +0000 UTC m=+0.215236365 container init 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:15:38 compute-0 podman[88764]: 2025-12-04 10:15:38.556723516 +0000 UTC m=+0.222275147 container start 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:15:38 compute-0 podman[88764]: 2025-12-04 10:15:38.575664318 +0000 UTC m=+0.241215939 container attach 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 04 10:15:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 04 10:15:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 04 10:15:38 compute-0 ceph-mon[75358]: osdmap e12: 3 total, 1 up, 3 in
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: pgmap v39: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 done with init, starting boot process
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 start_boot
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 04 10:15:38 compute-0 ceph-osd[88205]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Dec 04 10:15:38 compute-0 ceph-osd[86021]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 04 10:15:38 compute-0 ceph-osd[86021]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 04 10:15:38 compute-0 ceph-osd[86021]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:38 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:39 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:39 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:39 compute-0 lvm[88858]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:39 compute-0 lvm[88858]: VG ceph_vg0 finished
Dec 04 10:15:39 compute-0 lvm[88859]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:39 compute-0 lvm[88859]: VG ceph_vg1 finished
Dec 04 10:15:39 compute-0 lvm[88861]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:39 compute-0 lvm[88861]: VG ceph_vg2 finished
Dec 04 10:15:39 compute-0 competent_swartz[88780]: {}
Dec 04 10:15:39 compute-0 systemd[1]: libpod-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Deactivated successfully.
Dec 04 10:15:39 compute-0 podman[88764]: 2025-12-04 10:15:39.501751073 +0000 UTC m=+1.167302724 container died 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:15:39 compute-0 systemd[1]: libpod-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Consumed 1.483s CPU time.
Dec 04 10:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674-merged.mount: Deactivated successfully.
Dec 04 10:15:39 compute-0 podman[88764]: 2025-12-04 10:15:39.742577382 +0000 UTC m=+1.408129013 container remove 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:39 compute-0 systemd[1]: libpod-conmon-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Deactivated successfully.
Dec 04 10:15:39 compute-0 sudo[88262]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Dec 04 10:15:39 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 04 10:15:39 compute-0 ceph-mon[75358]: osdmap e13: 3 total, 1 up, 3 in
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:39 compute-0 sudo[88877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:15:39 compute-0 sudo[88877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:39 compute-0 sudo[88877]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:39 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:39 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:40 compute-0 sudo[88902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:40 compute-0 sudo[88902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:40 compute-0 sudo[88902]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:40 compute-0 sudo[88927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:15:40 compute-0 sudo[88927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:40 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:40 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 04 10:15:40 compute-0 podman[88995]: 2025-12-04 10:15:40.7215609 +0000 UTC m=+0.215713387 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:15:40 compute-0 podman[88995]: 2025-12-04 10:15:40.86168297 +0000 UTC m=+0.355835437 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.881 iops: 2017.502 elapsed_sec: 1.487
Dec 04 10:15:40 compute-0 ceph-osd[87071]: log_channel(cluster) log [WRN] : OSD bench result of 2017.501860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 0 waiting for initial osdmap
Dec 04 10:15:40 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:40.913+0000 7f1f97355640 -1 osd.1 0 waiting for initial osdmap
Dec 04 10:15:40 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec 04 10:15:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:40 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:40 compute-0 ceph-mon[75358]: purged_snaps scrub starts
Dec 04 10:15:40 compute-0 ceph-mon[75358]: purged_snaps scrub ok
Dec 04 10:15:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 04 10:15:40 compute-0 ceph-mon[75358]: osdmap e14: 3 total, 1 up, 3 in
Dec 04 10:15:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:40 compute-0 ceph-mon[75358]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 04 10:15:40 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:40.972+0000 7f1f9215a640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 set_numa_affinity not setting numa affinity
Dec 04 10:15:40 compute-0 ceph-osd[87071]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 04 10:15:41 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec 04 10:15:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:41 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 04 10:15:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 04 10:15:42 compute-0 ceph-osd[87071]: osd.1 14 tick checking mon for new map
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567] boot
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:42 compute-0 ceph-osd[87071]: osd.1 15 state: booting -> active
Dec 04 10:15:42 compute-0 ceph-mon[75358]: OSD bench result of 2017.501860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:15:42 compute-0 sudo[88927]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:42 compute-0 sudo[89140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:42 compute-0 sudo[89140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:42 compute-0 sudo[89140]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:42 compute-0 sudo[89165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- inventory --format=json-pretty --filter-for-batch
Dec 04 10:15:42 compute-0 sudo[89165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.645072902 +0000 UTC m=+0.073241979 container create e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.59991738 +0000 UTC m=+0.028086507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:42 compute-0 systemd[1]: Started libpod-conmon-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope.
Dec 04 10:15:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.78547312 +0000 UTC m=+0.213642247 container init e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.793617219 +0000 UTC m=+0.221786296 container start e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:15:42 compute-0 compassionate_bose[89218]: 167 167
Dec 04 10:15:42 compute-0 systemd[1]: libpod-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope: Deactivated successfully.
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.819183412 +0000 UTC m=+0.247352489 container attach e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.819861019 +0000 UTC m=+0.248030096 container died e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c6e877dded2435a8c33627001d01baed5facb4b72b0d88717d4fcbb32c88b86-merged.mount: Deactivated successfully.
Dec 04 10:15:42 compute-0 podman[89202]: 2025-12-04 10:15:42.93255387 +0000 UTC m=+0.360722937 container remove e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:42 compute-0 systemd[1]: libpod-conmon-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope: Deactivated successfully.
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:42 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:43 compute-0 ceph-mon[75358]: osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567] boot
Dec 04 10:15:43 compute-0 ceph-mon[75358]: osdmap e15: 3 total, 2 up, 3 in
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:43 compute-0 ceph-mon[75358]: pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 04 10:15:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.118419667 +0000 UTC m=+0.067233342 container create e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.080592114 +0000 UTC m=+0.029405819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Dec 04 10:15:43 compute-0 systemd[1]: Started libpod-conmon-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope.
Dec 04 10:15:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.256046636 +0000 UTC m=+0.204860331 container init e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.264777229 +0000 UTC m=+0.213590894 container start e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.309856679 +0000 UTC m=+0.258670324 container attach e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] creating main.db for devicehealth
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 04 10:15:43 compute-0 sudo[89279]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 04 10:15:43 compute-0 sudo[89279]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 04 10:15:43 compute-0 sudo[89279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 04 10:15:43 compute-0 sudo[89279]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]: [
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:     {
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "available": false,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "being_replaced": false,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "ceph_device_lvm": false,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "lsm_data": {},
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "lvs": [],
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "path": "/dev/sr0",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "rejected_reasons": [
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "Insufficient space (<5GB)",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "Has a FileSystem"
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         ],
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         "sys_api": {
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "actuators": null,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "device_nodes": [
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:                 "sr0"
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             ],
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "devname": "sr0",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "human_readable_size": "482.00 KB",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "id_bus": "ata",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "model": "QEMU DVD-ROM",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "nr_requests": "2",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "parent": "/dev/sr0",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "partitions": {},
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "path": "/dev/sr0",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "removable": "1",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "rev": "2.5+",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "ro": "0",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "rotational": "1",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "sas_address": "",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "sas_device_handle": "",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "scheduler_mode": "mq-deadline",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "sectors": 0,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "sectorsize": "2048",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "size": 493568.0,
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "support_discard": "2048",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "type": "disk",
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:             "vendor": "QEMU"
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:         }
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]:     }
Dec 04 10:15:43 compute-0 recursing_roentgen[89258]: ]
Dec 04 10:15:43 compute-0 systemd[1]: libpod-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope: Deactivated successfully.
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.835323317 +0000 UTC m=+0.784136992 container died e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6-merged.mount: Deactivated successfully.
Dec 04 10:15:43 compute-0 podman[89242]: 2025-12-04 10:15:43.931396661 +0000 UTC m=+0.880210346 container remove e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:43 compute-0 systemd[1]: libpod-conmon-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope: Deactivated successfully.
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:43 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:43 compute-0 sudo[89165]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:44 compute-0 sudo[90073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:44 compute-0 sudo[90073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:44 compute-0 sudo[90073]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 04 10:15:44 compute-0 ceph-mon[75358]: osdmap e16: 3 total, 2 up, 3 in
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:44 compute-0 sudo[90098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:15:44 compute-0 sudo[90098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.609 iops: 6555.894 elapsed_sec: 0.458
Dec 04 10:15:44 compute-0 ceph-osd[88205]: log_channel(cluster) log [WRN] : OSD bench result of 6555.894056 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:44 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:44.249+0000 7fd8a5a60640 -1 osd.2 0 waiting for initial osdmap
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 0 waiting for initial osdmap
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Dec 04 10:15:44 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:44.277+0000 7fd8a0865640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 set_numa_affinity not setting numa affinity
Dec 04 10:15:44 compute-0 ceph-osd[88205]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.509945353 +0000 UTC m=+0.043120222 container create b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:15:44 compute-0 systemd[1]: Started libpod-conmon-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope.
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.491247397 +0000 UTC m=+0.024422296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.607971676 +0000 UTC m=+0.141146545 container init b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.614995748 +0000 UTC m=+0.148170617 container start b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.619376075 +0000 UTC m=+0.152550944 container attach b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:44 compute-0 kind_booth[90153]: 167 167
Dec 04 10:15:44 compute-0 systemd[1]: libpod-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope: Deactivated successfully.
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.62163692 +0000 UTC m=+0.154811789 container died b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-840a37dc867c9f8a89c31042f5f392fe92f928aeeb6dfd05cdffc431f56c12f1-merged.mount: Deactivated successfully.
Dec 04 10:15:44 compute-0 podman[90136]: 2025-12-04 10:15:44.663223095 +0000 UTC m=+0.196397964 container remove b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:15:44 compute-0 systemd[1]: libpod-conmon-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope: Deactivated successfully.
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 04 10:15:44 compute-0 podman[90178]: 2025-12-04 10:15:44.862380867 +0000 UTC m=+0.043697798 container create f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:15:44 compute-0 systemd[1]: Started libpod-conmon-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope.
Dec 04 10:15:44 compute-0 podman[90178]: 2025-12-04 10:15:44.840869012 +0000 UTC m=+0.022185943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:44 compute-0 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 04 10:15:44 compute-0 podman[90178]: 2025-12-04 10:15:44.964486879 +0000 UTC m=+0.145803810 container init f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:15:44 compute-0 podman[90178]: 2025-12-04 10:15:44.97271618 +0000 UTC m=+0.154033081 container start f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:15:44 compute-0 podman[90178]: 2025-12-04 10:15:44.976695287 +0000 UTC m=+0.158012198 container attach f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.iwufnj(active, since 79s)
Dec 04 10:15:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 04 10:15:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 04 10:15:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487] boot
Dec 04 10:15:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 04 10:15:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 04 10:15:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:45 compute-0 ceph-osd[88205]: osd.2 18 state: booting -> active
Dec 04 10:15:45 compute-0 ceph-mon[75358]: Adjusting osd_memory_target on compute-0 to 43690k
Dec 04 10:15:45 compute-0 ceph-mon[75358]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 04 10:15:45 compute-0 ceph-mon[75358]: osdmap e17: 3 total, 2 up, 3 in
Dec 04 10:15:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:45 compute-0 ceph-mon[75358]: OSD bench result of 6555.894056 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 04 10:15:45 compute-0 ceph-mon[75358]: pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 04 10:15:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:45 compute-0 ceph-mon[75358]: mgrmap e9: compute-0.iwufnj(active, since 79s)
Dec 04 10:15:45 compute-0 recursing_jackson[90194]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:15:45 compute-0 recursing_jackson[90194]: --> All data devices are unavailable
Dec 04 10:15:45 compute-0 systemd[1]: libpod-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope: Deactivated successfully.
Dec 04 10:15:45 compute-0 podman[90178]: 2025-12-04 10:15:45.539448634 +0000 UTC m=+0.720765575 container died f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b-merged.mount: Deactivated successfully.
Dec 04 10:15:45 compute-0 podman[90178]: 2025-12-04 10:15:45.592481669 +0000 UTC m=+0.773798570 container remove f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:45 compute-0 systemd[1]: libpod-conmon-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope: Deactivated successfully.
Dec 04 10:15:45 compute-0 sudo[90098]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:45 compute-0 sudo[90226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:45 compute-0 sudo[90226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:45 compute-0 sudo[90226]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:45 compute-0 sudo[90251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:15:45 compute-0 sudo[90251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.109400107 +0000 UTC m=+0.065617213 container create 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:46 compute-0 systemd[1]: Started libpod-conmon-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope.
Dec 04 10:15:46 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.083268288 +0000 UTC m=+0.039485474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.184718705 +0000 UTC m=+0.140935811 container init 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.194441242 +0000 UTC m=+0.150658338 container start 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.198716767 +0000 UTC m=+0.154933913 container attach 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:46 compute-0 reverent_hopper[90303]: 167 167
Dec 04 10:15:46 compute-0 systemd[1]: libpod-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope: Deactivated successfully.
Dec 04 10:15:46 compute-0 conmon[90303]: conmon 69e5b79e26e02874a9e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope/container/memory.events
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.20214108 +0000 UTC m=+0.158358176 container died 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:15:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 04 10:15:46 compute-0 ceph-mon[75358]: osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487] boot
Dec 04 10:15:46 compute-0 ceph-mon[75358]: osdmap e18: 3 total, 3 up, 3 in
Dec 04 10:15:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 04 10:15:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 04 10:15:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 04 10:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a15d6ad0bfe531237d0f779665933e4439e0663ce48915e67d067665196afd3-merged.mount: Deactivated successfully.
Dec 04 10:15:46 compute-0 podman[90287]: 2025-12-04 10:15:46.249501616 +0000 UTC m=+0.205718712 container remove 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:15:46 compute-0 systemd[1]: libpod-conmon-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope: Deactivated successfully.
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.413738605 +0000 UTC m=+0.048683709 container create 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:46 compute-0 systemd[1]: Started libpod-conmon-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope.
Dec 04 10:15:46 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.39265218 +0000 UTC m=+0.027597304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.49339596 +0000 UTC m=+0.128341074 container init 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.500663317 +0000 UTC m=+0.135608421 container start 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.523642898 +0000 UTC m=+0.158588002 container attach 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:15:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:46 compute-0 vibrant_curie[90342]: {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     "0": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "devices": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "/dev/loop3"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             ],
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_name": "ceph_lv0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_size": "21470642176",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "name": "ceph_lv0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "tags": {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.crush_device_class": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.encrypted": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_id": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.vdo": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.with_tpm": "0"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             },
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "vg_name": "ceph_vg0"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         }
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     ],
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     "1": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "devices": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "/dev/loop4"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             ],
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_name": "ceph_lv1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_size": "21470642176",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "name": "ceph_lv1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "tags": {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.crush_device_class": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.encrypted": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_id": "1",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.vdo": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.with_tpm": "0"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             },
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "vg_name": "ceph_vg1"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         }
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     ],
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     "2": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "devices": [
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "/dev/loop5"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             ],
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_name": "ceph_lv2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_size": "21470642176",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "name": "ceph_lv2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "tags": {
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.crush_device_class": "",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.encrypted": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osd_id": "2",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.vdo": "0",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:                 "ceph.with_tpm": "0"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             },
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "type": "block",
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:             "vg_name": "ceph_vg2"
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:         }
Dec 04 10:15:46 compute-0 vibrant_curie[90342]:     ]
Dec 04 10:15:46 compute-0 vibrant_curie[90342]: }
Dec 04 10:15:46 compute-0 systemd[1]: libpod-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope: Deactivated successfully.
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.859068116 +0000 UTC m=+0.494013220 container died 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32-merged.mount: Deactivated successfully.
Dec 04 10:15:46 compute-0 podman[90326]: 2025-12-04 10:15:46.906647907 +0000 UTC m=+0.541593031 container remove 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 04 10:15:46 compute-0 systemd[1]: libpod-conmon-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope: Deactivated successfully.
Dec 04 10:15:46 compute-0 sudo[90251]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:47 compute-0 sudo[90362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:47 compute-0 sudo[90362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:47 compute-0 sudo[90362]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:47 compute-0 sudo[90387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:15:47 compute-0 sudo[90387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:47 compute-0 ceph-mon[75358]: osdmap e19: 3 total, 3 up, 3 in
Dec 04 10:15:47 compute-0 ceph-mon[75358]: pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.427066891 +0000 UTC m=+0.048484455 container create 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:15:47 compute-0 systemd[1]: Started libpod-conmon-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope.
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.405584097 +0000 UTC m=+0.027001681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.519586069 +0000 UTC m=+0.141003683 container init 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.528950998 +0000 UTC m=+0.150368572 container start 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.53353163 +0000 UTC m=+0.154949244 container attach 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:15:47 compute-0 trusting_stonebraker[90441]: 167 167
Dec 04 10:15:47 compute-0 systemd[1]: libpod-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope: Deactivated successfully.
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.537868775 +0000 UTC m=+0.159286339 container died 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0fd7d232b1ebfaf3d2ddfa5d7ebb9ceeadb5c7856f9876fd40902c4d654ff4c-merged.mount: Deactivated successfully.
Dec 04 10:15:47 compute-0 podman[90425]: 2025-12-04 10:15:47.578709592 +0000 UTC m=+0.200127156 container remove 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:47 compute-0 systemd[1]: libpod-conmon-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope: Deactivated successfully.
Dec 04 10:15:47 compute-0 podman[90464]: 2025-12-04 10:15:47.738552094 +0000 UTC m=+0.044143748 container create 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:15:47 compute-0 systemd[1]: Started libpod-conmon-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope.
Dec 04 10:15:47 compute-0 podman[90464]: 2025-12-04 10:15:47.716913926 +0000 UTC m=+0.022505600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:47 compute-0 podman[90464]: 2025-12-04 10:15:47.846421367 +0000 UTC m=+0.152013041 container init 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:47 compute-0 podman[90464]: 2025-12-04 10:15:47.853067889 +0000 UTC m=+0.158659543 container start 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:47 compute-0 podman[90464]: 2025-12-04 10:15:47.857462546 +0000 UTC m=+0.163054230 container attach 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:48 compute-0 lvm[90559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:48 compute-0 lvm[90558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:48 compute-0 lvm[90558]: VG ceph_vg0 finished
Dec 04 10:15:48 compute-0 lvm[90559]: VG ceph_vg1 finished
Dec 04 10:15:48 compute-0 lvm[90561]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:48 compute-0 lvm[90561]: VG ceph_vg2 finished
Dec 04 10:15:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:48 compute-0 gifted_napier[90480]: {}
Dec 04 10:15:48 compute-0 systemd[1]: libpod-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Deactivated successfully.
Dec 04 10:15:48 compute-0 podman[90464]: 2025-12-04 10:15:48.749587613 +0000 UTC m=+1.055179267 container died 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:48 compute-0 systemd[1]: libpod-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Consumed 1.448s CPU time.
Dec 04 10:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0-merged.mount: Deactivated successfully.
Dec 04 10:15:48 compute-0 podman[90464]: 2025-12-04 10:15:48.80149521 +0000 UTC m=+1.107086864 container remove 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:48 compute-0 systemd[1]: libpod-conmon-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Deactivated successfully.
Dec 04 10:15:48 compute-0 sudo[90387]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:48 compute-0 sudo[90575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:15:48 compute-0 sudo[90575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:48 compute-0 sudo[90575]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:49 compute-0 sudo[90600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:49 compute-0 sudo[90600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:49 compute-0 sudo[90600]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:49 compute-0 sudo[90625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:15:49 compute-0 sudo[90625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:49 compute-0 podman[90694]: 2025-12-04 10:15:49.564147456 +0000 UTC m=+0.074517550 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:49 compute-0 podman[90694]: 2025-12-04 10:15:49.676725815 +0000 UTC m=+0.187095848 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:49 compute-0 ceph-mon[75358]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:50 compute-0 sudo[90625]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:50 compute-0 sudo[90846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:50 compute-0 sudo[90846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:50 compute-0 sudo[90846]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:50 compute-0 sudo[90871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:15:50 compute-0 sudo[90871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:51 compute-0 sudo[90871]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:51 compute-0 sudo[90927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:51 compute-0 sudo[90927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:51 compute-0 sudo[90927]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:51 compute-0 sudo[90952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:15:51 compute-0 sudo[90952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:51 compute-0 ceph-mon[75358]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:15:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:51.95755503 +0000 UTC m=+0.025239317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.080267354 +0000 UTC m=+0.147951641 container create d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:15:52 compute-0 systemd[1]: Started libpod-conmon-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope.
Dec 04 10:15:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.268000107 +0000 UTC m=+0.335684414 container init d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.276109325 +0000 UTC m=+0.343793612 container start d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:52 compute-0 vigilant_knuth[91005]: 167 167
Dec 04 10:15:52 compute-0 systemd[1]: libpod-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope: Deactivated successfully.
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.292323571 +0000 UTC m=+0.360007948 container attach d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.292901805 +0000 UTC m=+0.360586092 container died d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-668011341d4c2a44eae23abf5ae6eeeb8f8f56f4752f56304a2c89a0e992ee75-merged.mount: Deactivated successfully.
Dec 04 10:15:52 compute-0 podman[90989]: 2025-12-04 10:15:52.332687917 +0000 UTC m=+0.400372204 container remove d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:52 compute-0 systemd[1]: libpod-conmon-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope: Deactivated successfully.
Dec 04 10:15:52 compute-0 podman[91030]: 2025-12-04 10:15:52.496184977 +0000 UTC m=+0.051764445 container create 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:15:52 compute-0 systemd[1]: Started libpod-conmon-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope.
Dec 04 10:15:52 compute-0 podman[91030]: 2025-12-04 10:15:52.471909465 +0000 UTC m=+0.027488973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:52 compute-0 podman[91030]: 2025-12-04 10:15:52.610571269 +0000 UTC m=+0.166150787 container init 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:52 compute-0 podman[91030]: 2025-12-04 10:15:52.618977244 +0000 UTC m=+0.174556732 container start 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:15:52 compute-0 podman[91030]: 2025-12-04 10:15:52.62250443 +0000 UTC m=+0.178083898 container attach 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:15:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:53 compute-0 youthful_heisenberg[91046]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:15:53 compute-0 youthful_heisenberg[91046]: --> All data devices are unavailable
Dec 04 10:15:53 compute-0 systemd[1]: libpod-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope: Deactivated successfully.
Dec 04 10:15:53 compute-0 podman[91030]: 2025-12-04 10:15:53.184078449 +0000 UTC m=+0.739657937 container died 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc-merged.mount: Deactivated successfully.
Dec 04 10:15:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:53 compute-0 podman[91030]: 2025-12-04 10:15:53.694684772 +0000 UTC m=+1.250264280 container remove 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:15:53 compute-0 sudo[90952]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:53 compute-0 systemd[1]: libpod-conmon-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope: Deactivated successfully.
Dec 04 10:15:53 compute-0 ceph-mon[75358]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:53 compute-0 sudo[91076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:53 compute-0 sudo[91076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:53 compute-0 sudo[91076]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:53 compute-0 sudo[91101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:15:53 compute-0 sudo[91101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.260671757 +0000 UTC m=+0.063223364 container create faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:15:54 compute-0 systemd[1]: Started libpod-conmon-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope.
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.219974554 +0000 UTC m=+0.022526181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.348601194 +0000 UTC m=+0.151152821 container init faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.354629212 +0000 UTC m=+0.157180809 container start faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:54 compute-0 inspiring_mirzakhani[91154]: 167 167
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.358954317 +0000 UTC m=+0.161506014 container attach faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:15:54 compute-0 systemd[1]: libpod-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope: Deactivated successfully.
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.360195368 +0000 UTC m=+0.162746965 container died faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 04 10:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f3d90943f70c6ebce3ac0c2a40b23c174f1f0d93ab7c9f97b1a7e093b18060-merged.mount: Deactivated successfully.
Dec 04 10:15:54 compute-0 podman[91138]: 2025-12-04 10:15:54.400257575 +0000 UTC m=+0.202809202 container remove faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:54 compute-0 systemd[1]: libpod-conmon-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope: Deactivated successfully.
Dec 04 10:15:54 compute-0 podman[91177]: 2025-12-04 10:15:54.606253583 +0000 UTC m=+0.057766951 container create 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:15:54 compute-0 systemd[1]: Started libpod-conmon-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope.
Dec 04 10:15:54 compute-0 podman[91177]: 2025-12-04 10:15:54.580417373 +0000 UTC m=+0.031930781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:54 compute-0 podman[91177]: 2025-12-04 10:15:54.709599876 +0000 UTC m=+0.161113244 container init 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:54 compute-0 podman[91177]: 2025-12-04 10:15:54.728380465 +0000 UTC m=+0.179893823 container start 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:15:54 compute-0 podman[91177]: 2025-12-04 10:15:54.733250013 +0000 UTC m=+0.184763381 container attach 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]: {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     "0": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "devices": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "/dev/loop3"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             ],
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_name": "ceph_lv0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_size": "21470642176",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "name": "ceph_lv0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "tags": {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.crush_device_class": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.encrypted": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_id": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.vdo": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.with_tpm": "0"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             },
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "vg_name": "ceph_vg0"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         }
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     ],
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     "1": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "devices": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "/dev/loop4"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             ],
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_name": "ceph_lv1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_size": "21470642176",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "name": "ceph_lv1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "tags": {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.crush_device_class": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.encrypted": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_id": "1",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.vdo": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.with_tpm": "0"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             },
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "vg_name": "ceph_vg1"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         }
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     ],
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     "2": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "devices": [
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "/dev/loop5"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             ],
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_name": "ceph_lv2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_size": "21470642176",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "name": "ceph_lv2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "tags": {
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.cluster_name": "ceph",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.crush_device_class": "",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.encrypted": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.objectstore": "bluestore",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osd_id": "2",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.vdo": "0",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:                 "ceph.with_tpm": "0"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             },
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "type": "block",
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:             "vg_name": "ceph_vg2"
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:         }
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]:     ]
Dec 04 10:15:55 compute-0 upbeat_pasteur[91195]: }
Dec 04 10:15:55 compute-0 systemd[1]: libpod-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope: Deactivated successfully.
Dec 04 10:15:55 compute-0 podman[91177]: 2025-12-04 10:15:55.045658529 +0000 UTC m=+0.497171967 container died 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5-merged.mount: Deactivated successfully.
Dec 04 10:15:55 compute-0 podman[91177]: 2025-12-04 10:15:55.102395265 +0000 UTC m=+0.553908653 container remove 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:55 compute-0 systemd[1]: libpod-conmon-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope: Deactivated successfully.
Dec 04 10:15:55 compute-0 sudo[91101]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:55 compute-0 sudo[91215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:15:55 compute-0 sudo[91215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:55 compute-0 sudo[91215]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:55 compute-0 sudo[91240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:15:55 compute-0 sudo[91240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.635179389 +0000 UTC m=+0.048424232 container create ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:15:55 compute-0 systemd[1]: Started libpod-conmon-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope.
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.612810303 +0000 UTC m=+0.026055136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.730328662 +0000 UTC m=+0.143573555 container init ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.740378437 +0000 UTC m=+0.153623260 container start ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.745028741 +0000 UTC m=+0.158273644 container attach ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:15:55 compute-0 upbeat_nash[91293]: 167 167
Dec 04 10:15:55 compute-0 systemd[1]: libpod-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope: Deactivated successfully.
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.74907166 +0000 UTC m=+0.162316483 container died ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a5e3c467735141c6ac7fc836ee6ce1701c906d70e15bcad126ae5740eb91a2-merged.mount: Deactivated successfully.
Dec 04 10:15:55 compute-0 podman[91277]: 2025-12-04 10:15:55.791521036 +0000 UTC m=+0.204765849 container remove ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:15:55 compute-0 systemd[1]: libpod-conmon-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope: Deactivated successfully.
Dec 04 10:15:55 compute-0 ceph-mon[75358]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:55 compute-0 podman[91318]: 2025-12-04 10:15:55.986777402 +0000 UTC m=+0.050063753 container create 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:15:56 compute-0 systemd[1]: Started libpod-conmon-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope.
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:55.964240952 +0000 UTC m=+0.027527303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:15:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:56.075829786 +0000 UTC m=+0.139116127 container init 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:56.082929709 +0000 UTC m=+0.146216030 container start 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:56.087256425 +0000 UTC m=+0.150542746 container attach 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:15:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:56 compute-0 lvm[91412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:15:56 compute-0 lvm[91412]: VG ceph_vg0 finished
Dec 04 10:15:56 compute-0 lvm[91414]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:15:56 compute-0 lvm[91414]: VG ceph_vg1 finished
Dec 04 10:15:56 compute-0 lvm[91416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:15:56 compute-0 lvm[91416]: VG ceph_vg2 finished
Dec 04 10:15:56 compute-0 goofy_lehmann[91335]: {}
Dec 04 10:15:56 compute-0 systemd[1]: libpod-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Deactivated successfully.
Dec 04 10:15:56 compute-0 systemd[1]: libpod-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Consumed 1.410s CPU time.
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:56.934217329 +0000 UTC m=+0.997503660 container died 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7-merged.mount: Deactivated successfully.
Dec 04 10:15:56 compute-0 podman[91318]: 2025-12-04 10:15:56.981598856 +0000 UTC m=+1.044885197 container remove 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:15:56 compute-0 systemd[1]: libpod-conmon-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Deactivated successfully.
Dec 04 10:15:57 compute-0 sudo[91240]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:15:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:15:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:57 compute-0 sudo[91429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:15:57 compute-0 sudo[91429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:15:57 compute-0 sudo[91429]: pam_unix(sudo:session): session closed for user root
Dec 04 10:15:57 compute-0 ceph-mon[75358]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:15:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:15:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:15:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:15:59 compute-0 ceph-mon[75358]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:00 compute-0 sshd-session[91454]: Invalid user syncthing from 217.154.62.22 port 51834
Dec 04 10:16:00 compute-0 sshd-session[91454]: Received disconnect from 217.154.62.22 port 51834:11: Bye Bye [preauth]
Dec 04 10:16:00 compute-0 sshd-session[91454]: Disconnected from invalid user syncthing 217.154.62.22 port 51834 [preauth]
Dec 04 10:16:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:01 compute-0 sudo[91479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sanvkmnqwzpbfrdodpqskjoczuyrwykp ; /usr/bin/python3'
Dec 04 10:16:01 compute-0 sudo[91479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:01 compute-0 python3[91481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:01 compute-0 podman[91483]: 2025-12-04 10:16:01.669784904 +0000 UTC m=+0.066567096 container create 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:01 compute-0 systemd[1]: Started libpod-conmon-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope.
Dec 04 10:16:01 compute-0 podman[91483]: 2025-12-04 10:16:01.647465459 +0000 UTC m=+0.044247651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:01 compute-0 ceph-mon[75358]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:01 compute-0 podman[91483]: 2025-12-04 10:16:01.77859199 +0000 UTC m=+0.175374192 container init 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:16:01 compute-0 podman[91483]: 2025-12-04 10:16:01.78720748 +0000 UTC m=+0.183989652 container start 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:01 compute-0 podman[91483]: 2025-12-04 10:16:01.791007013 +0000 UTC m=+0.187789215 container attach 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:16:01 compute-0 anacron[30888]: Job `cron.daily' started
Dec 04 10:16:01 compute-0 anacron[30888]: Job `cron.daily' terminated
Dec 04 10:16:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 04 10:16:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115381984' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:02 compute-0 suspicious_bose[91500]: 
Dec 04 10:16:02 compute-0 suspicious_bose[91500]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":19,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502874112,"bytes_avail":63909052416,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-04T10:15:28.674444+0000","services":{}},"progress_events":{}}
Dec 04 10:16:02 compute-0 systemd[1]: libpod-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope: Deactivated successfully.
Dec 04 10:16:02 compute-0 podman[91483]: 2025-12-04 10:16:02.353053372 +0000 UTC m=+0.749835574 container died 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e-merged.mount: Deactivated successfully.
Dec 04 10:16:02 compute-0 podman[91483]: 2025-12-04 10:16:02.399342082 +0000 UTC m=+0.796124244 container remove 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:16:02 compute-0 systemd[1]: libpod-conmon-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope: Deactivated successfully.
Dec 04 10:16:02 compute-0 sudo[91479]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:02 compute-0 sudo[91561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gitupvlvnqeoczuyykayokfrzxzrjgtw ; /usr/bin/python3'
Dec 04 10:16:02 compute-0 sudo[91561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1115381984' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:02 compute-0 python3[91563]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:02 compute-0 podman[91564]: 2025-12-04 10:16:02.986072064 +0000 UTC m=+0.046619269 container create 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:16:03 compute-0 systemd[1]: Started libpod-conmon-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope.
Dec 04 10:16:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:03.058510542 +0000 UTC m=+0.119057767 container init 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:02.968335412 +0000 UTC m=+0.028882647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:03.06700623 +0000 UTC m=+0.127553435 container start 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:03.071371386 +0000 UTC m=+0.131918591 container attach 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 04 10:16:03 compute-0 ceph-mon[75358]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 04 10:16:03 compute-0 optimistic_zhukovsky[91580]: pool 'vms' created
Dec 04 10:16:03 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 04 10:16:03 compute-0 systemd[1]: libpod-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope: Deactivated successfully.
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:03.850661119 +0000 UTC m=+0.911208344 container died 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 04 10:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa-merged.mount: Deactivated successfully.
Dec 04 10:16:03 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:03 compute-0 podman[91564]: 2025-12-04 10:16:03.896983769 +0000 UTC m=+0.957530974 container remove 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:03 compute-0 systemd[1]: libpod-conmon-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope: Deactivated successfully.
Dec 04 10:16:03 compute-0 sudo[91561]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:04 compute-0 sudo[91643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkjdzsxsbrnjneqxukmtpueehkiwjwv ; /usr/bin/python3'
Dec 04 10:16:04 compute-0 sudo[91643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:04 compute-0 python3[91645]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.258695709 +0000 UTC m=+0.054741278 container create 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:04 compute-0 systemd[1]: Started libpod-conmon-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope.
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.230565032 +0000 UTC m=+0.026610681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.365396383 +0000 UTC m=+0.161441972 container init 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.375346307 +0000 UTC m=+0.171391916 container start 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.380294797 +0000 UTC m=+0.176340386 container attach 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 04 10:16:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:04 compute-0 ceph-mon[75358]: osdmap e20: 3 total, 3 up, 3 in
Dec 04 10:16:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 04 10:16:04 compute-0 keen_euler[91662]: pool 'volumes' created
Dec 04 10:16:04 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 04 10:16:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:04 compute-0 systemd[1]: libpod-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope: Deactivated successfully.
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.863876391 +0000 UTC m=+0.659921960 container died 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec-merged.mount: Deactivated successfully.
Dec 04 10:16:04 compute-0 podman[91646]: 2025-12-04 10:16:04.908604673 +0000 UTC m=+0.704650252 container remove 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:04 compute-0 systemd[1]: libpod-conmon-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope: Deactivated successfully.
Dec 04 10:16:04 compute-0 sudo[91643]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:05 compute-0 sudo[91726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeuxfzbhvnzfuxwpdonvotsbvghaahit ; /usr/bin/python3'
Dec 04 10:16:05 compute-0 sudo[91726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:05 compute-0 python3[91728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:05 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.265048814 +0000 UTC m=+0.047120012 container create 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:05 compute-0 systemd[1]: Started libpod-conmon-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope.
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.245700641 +0000 UTC m=+0.027771869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:05 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.373694346 +0000 UTC m=+0.155765614 container init 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.381960107 +0000 UTC m=+0.164031295 container start 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.385491364 +0000 UTC m=+0.167562572 container attach 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:05 compute-0 sshd-session[91666]: Invalid user kiosk from 103.149.86.230 port 56450
Dec 04 10:16:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 04 10:16:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 04 10:16:05 compute-0 quirky_fermat[91744]: pool 'backups' created
Dec 04 10:16:05 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 04 10:16:05 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:05 compute-0 ceph-mon[75358]: pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:05 compute-0 ceph-mon[75358]: osdmap e21: 3 total, 3 up, 3 in
Dec 04 10:16:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:05 compute-0 systemd[1]: libpod-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope: Deactivated successfully.
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.873747532 +0000 UTC m=+0.655818720 container died 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac-merged.mount: Deactivated successfully.
Dec 04 10:16:05 compute-0 podman[91729]: 2025-12-04 10:16:05.912662673 +0000 UTC m=+0.694733851 container remove 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:16:05 compute-0 systemd[1]: libpod-conmon-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope: Deactivated successfully.
Dec 04 10:16:05 compute-0 sudo[91726]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:06 compute-0 sshd-session[91666]: Received disconnect from 103.149.86.230 port 56450:11: Bye Bye [preauth]
Dec 04 10:16:06 compute-0 sshd-session[91666]: Disconnected from invalid user kiosk 103.149.86.230 port 56450 [preauth]
Dec 04 10:16:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:06 compute-0 sudo[91807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cocycrvgwfdpcuzsarigxlxyqbpqtbze ; /usr/bin/python3'
Dec 04 10:16:06 compute-0 sudo[91807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:06 compute-0 python3[91809]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.287151273 +0000 UTC m=+0.043875451 container create 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:16:06 compute-0 systemd[1]: Started libpod-conmon-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope.
Dec 04 10:16:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.358968626 +0000 UTC m=+0.115692824 container init 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.26898037 +0000 UTC m=+0.025704568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.366227623 +0000 UTC m=+0.122951801 container start 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.370056157 +0000 UTC m=+0.126780335 container attach 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:16:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v63: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 04 10:16:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 04 10:16:06 compute-0 confident_edison[91826]: pool 'images' created
Dec 04 10:16:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 04 10:16:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:06 compute-0 ceph-mon[75358]: osdmap e22: 3 total, 3 up, 3 in
Dec 04 10:16:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:06 compute-0 ceph-mon[75358]: osdmap e23: 3 total, 3 up, 3 in
Dec 04 10:16:06 compute-0 systemd[1]: libpod-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope: Deactivated successfully.
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.87817783 +0000 UTC m=+0.634902018 container died 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd-merged.mount: Deactivated successfully.
Dec 04 10:16:06 compute-0 podman[91810]: 2025-12-04 10:16:06.921083318 +0000 UTC m=+0.677807496 container remove 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:16:06 compute-0 systemd[1]: libpod-conmon-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope: Deactivated successfully.
Dec 04 10:16:06 compute-0 sudo[91807]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:07 compute-0 sudo[91888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shozvmeocqfpykpcwsbunlibkuuxyziz ; /usr/bin/python3'
Dec 04 10:16:07 compute-0 sudo[91888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:07 compute-0 python3[91890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:07 compute-0 podman[91891]: 2025-12-04 10:16:07.358201127 +0000 UTC m=+0.074565910 container create 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:16:07 compute-0 systemd[1]: Started libpod-conmon-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope.
Dec 04 10:16:07 compute-0 podman[91891]: 2025-12-04 10:16:07.329792494 +0000 UTC m=+0.046157357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:07 compute-0 podman[91891]: 2025-12-04 10:16:07.458883615 +0000 UTC m=+0.175248408 container init 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:07 compute-0 podman[91891]: 2025-12-04 10:16:07.466965743 +0000 UTC m=+0.183330526 container start 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:16:07 compute-0 podman[91891]: 2025-12-04 10:16:07.471423721 +0000 UTC m=+0.187788514 container attach 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:16:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 04 10:16:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 04 10:16:07 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 04 10:16:07 compute-0 ceph-mon[75358]: pgmap v63: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:07 compute-0 ceph-mon[75358]: osdmap e24: 3 total, 3 up, 3 in
Dec 04 10:16:07 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v66: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 04 10:16:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 04 10:16:08 compute-0 amazing_shamir[91906]: pool 'cephfs.cephfs.meta' created
Dec 04 10:16:08 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 04 10:16:08 compute-0 systemd[1]: libpod-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope: Deactivated successfully.
Dec 04 10:16:08 compute-0 podman[91891]: 2025-12-04 10:16:08.924814769 +0000 UTC m=+1.641179652 container died 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c-merged.mount: Deactivated successfully.
Dec 04 10:16:08 compute-0 podman[91891]: 2025-12-04 10:16:08.97895319 +0000 UTC m=+1.695318003 container remove 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:09 compute-0 systemd[1]: libpod-conmon-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope: Deactivated successfully.
Dec 04 10:16:09 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:09 compute-0 sudo[91888]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:09 compute-0 sudo[91967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbhpfgxwqjvcitoaevdngtzzqpyrvay ; /usr/bin/python3'
Dec 04 10:16:09 compute-0 sudo[91967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:09 compute-0 python3[91969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:09 compute-0 podman[91970]: 2025-12-04 10:16:09.39434832 +0000 UTC m=+0.079021990 container create 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:16:09 compute-0 systemd[1]: Started libpod-conmon-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope.
Dec 04 10:16:09 compute-0 podman[91970]: 2025-12-04 10:16:09.343877128 +0000 UTC m=+0.028550898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:09 compute-0 podman[91970]: 2025-12-04 10:16:09.487371461 +0000 UTC m=+0.172045141 container init 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:16:09 compute-0 podman[91970]: 2025-12-04 10:16:09.496084524 +0000 UTC m=+0.180758214 container start 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:16:09 compute-0 podman[91970]: 2025-12-04 10:16:09.500058511 +0000 UTC m=+0.184732211 container attach 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:16:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 04 10:16:09 compute-0 ceph-mon[75358]: pgmap v66: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:09 compute-0 ceph-mon[75358]: osdmap e25: 3 total, 3 up, 3 in
Dec 04 10:16:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 04 10:16:09 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 04 10:16:09 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 04 10:16:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v69: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 04 10:16:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 04 10:16:10 compute-0 dreamy_hofstadter[91985]: pool 'cephfs.cephfs.data' created
Dec 04 10:16:10 compute-0 ceph-mon[75358]: osdmap e26: 3 total, 3 up, 3 in
Dec 04 10:16:10 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 04 10:16:10 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 04 10:16:10 compute-0 systemd[1]: libpod-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope: Deactivated successfully.
Dec 04 10:16:10 compute-0 podman[91970]: 2025-12-04 10:16:10.961976137 +0000 UTC m=+1.646649857 container died 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3-merged.mount: Deactivated successfully.
Dec 04 10:16:11 compute-0 podman[91970]: 2025-12-04 10:16:11.022645567 +0000 UTC m=+1.707319247 container remove 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:16:11 compute-0 systemd[1]: libpod-conmon-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope: Deactivated successfully.
Dec 04 10:16:11 compute-0 sudo[91967]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:11 compute-0 sudo[92046]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btsxpmghjebmbkayhgkqiqavnawteezt ; /usr/bin/python3'
Dec 04 10:16:11 compute-0 sudo[92046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:11 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:11 compute-0 python3[92048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:11 compute-0 podman[92049]: 2025-12-04 10:16:11.544704131 +0000 UTC m=+0.088408219 container create 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:11 compute-0 systemd[1]: Started libpod-conmon-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope.
Dec 04 10:16:11 compute-0 podman[92049]: 2025-12-04 10:16:11.507826971 +0000 UTC m=+0.051530909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:11 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:11 compute-0 podman[92049]: 2025-12-04 10:16:11.657432253 +0000 UTC m=+0.201136211 container init 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:11 compute-0 podman[92049]: 2025-12-04 10:16:11.667678843 +0000 UTC m=+0.211382711 container start 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:16:11 compute-0 podman[92049]: 2025-12-04 10:16:11.672359047 +0000 UTC m=+0.216063015 container attach 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 04 10:16:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 04 10:16:11 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 04 10:16:11 compute-0 ceph-mon[75358]: pgmap v69: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 04 10:16:11 compute-0 ceph-mon[75358]: osdmap e27: 3 total, 3 up, 3 in
Dec 04 10:16:11 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 04 10:16:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 04 10:16:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 04 10:16:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 04 10:16:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 04 10:16:12 compute-0 sweet_liskov[92065]: enabled application 'rbd' on pool 'vms'
Dec 04 10:16:12 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 04 10:16:12 compute-0 ceph-mon[75358]: osdmap e28: 3 total, 3 up, 3 in
Dec 04 10:16:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 04 10:16:12 compute-0 systemd[1]: libpod-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope: Deactivated successfully.
Dec 04 10:16:12 compute-0 podman[92049]: 2025-12-04 10:16:12.986437603 +0000 UTC m=+1.530141501 container died 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba-merged.mount: Deactivated successfully.
Dec 04 10:16:13 compute-0 podman[92049]: 2025-12-04 10:16:13.038502014 +0000 UTC m=+1.582205902 container remove 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:16:13 compute-0 systemd[1]: libpod-conmon-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope: Deactivated successfully.
Dec 04 10:16:13 compute-0 sudo[92046]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:13 compute-0 sudo[92126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouvreicfymlwuulbyxtruifuvutkvfni ; /usr/bin/python3'
Dec 04 10:16:13 compute-0 sudo[92126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:13 compute-0 python3[92128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:13 compute-0 podman[92129]: 2025-12-04 10:16:13.451380532 +0000 UTC m=+0.056138691 container create 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:16:13 compute-0 systemd[1]: Started libpod-conmon-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope.
Dec 04 10:16:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:13 compute-0 podman[92129]: 2025-12-04 10:16:13.427087249 +0000 UTC m=+0.031845428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:13 compute-0 podman[92129]: 2025-12-04 10:16:13.544851364 +0000 UTC m=+0.149609553 container init 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:16:13 compute-0 podman[92129]: 2025-12-04 10:16:13.555194596 +0000 UTC m=+0.159952735 container start 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:16:13 compute-0 podman[92129]: 2025-12-04 10:16:13.559946602 +0000 UTC m=+0.164704781 container attach 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:13 compute-0 ceph-mon[75358]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 04 10:16:13 compute-0 ceph-mon[75358]: osdmap e29: 3 total, 3 up, 3 in
Dec 04 10:16:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 04 10:16:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 04 10:16:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 04 10:16:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 04 10:16:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 04 10:16:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 04 10:16:14 compute-0 jolly_bell[92144]: enabled application 'rbd' on pool 'volumes'
Dec 04 10:16:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 04 10:16:15 compute-0 systemd[1]: libpod-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope: Deactivated successfully.
Dec 04 10:16:15 compute-0 podman[92129]: 2025-12-04 10:16:15.013021741 +0000 UTC m=+1.617779910 container died 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667-merged.mount: Deactivated successfully.
Dec 04 10:16:15 compute-0 podman[92129]: 2025-12-04 10:16:15.066944217 +0000 UTC m=+1.671702346 container remove 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:15 compute-0 systemd[1]: libpod-conmon-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope: Deactivated successfully.
Dec 04 10:16:15 compute-0 sudo[92126]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:15 compute-0 sudo[92204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efycgzescrfmwunlsraietuepaggdiwq ; /usr/bin/python3'
Dec 04 10:16:15 compute-0 sudo[92204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:15 compute-0 python3[92206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:15 compute-0 podman[92207]: 2025-12-04 10:16:15.471914643 +0000 UTC m=+0.076642692 container create 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 04 10:16:15 compute-0 systemd[1]: Started libpod-conmon-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope.
Dec 04 10:16:15 compute-0 podman[92207]: 2025-12-04 10:16:15.439258866 +0000 UTC m=+0.043986995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:15 compute-0 podman[92207]: 2025-12-04 10:16:15.584616504 +0000 UTC m=+0.189344593 container init 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:16:15 compute-0 podman[92207]: 2025-12-04 10:16:15.59552869 +0000 UTC m=+0.200256749 container start 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:16:15 compute-0 podman[92207]: 2025-12-04 10:16:15.598979365 +0000 UTC m=+0.203707434 container attach 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:15 compute-0 ceph-mon[75358]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:15 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 04 10:16:15 compute-0 ceph-mon[75358]: osdmap e30: 3 total, 3 up, 3 in
Dec 04 10:16:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 04 10:16:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 04 10:16:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 04 10:16:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 04 10:16:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 04 10:16:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 04 10:16:17 compute-0 unruffled_mahavira[92222]: enabled application 'rbd' on pool 'backups'
Dec 04 10:16:17 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 04 10:16:17 compute-0 systemd[1]: libpod-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope: Deactivated successfully.
Dec 04 10:16:17 compute-0 podman[92207]: 2025-12-04 10:16:17.03348636 +0000 UTC m=+1.638214409 container died 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 04 10:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e-merged.mount: Deactivated successfully.
Dec 04 10:16:17 compute-0 podman[92207]: 2025-12-04 10:16:17.079847162 +0000 UTC m=+1.684575191 container remove 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:17 compute-0 systemd[1]: libpod-conmon-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope: Deactivated successfully.
Dec 04 10:16:17 compute-0 sudo[92204]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:17 compute-0 sudo[92283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rypaobarvzehzhjgynfluwfycqyyqiem ; /usr/bin/python3'
Dec 04 10:16:17 compute-0 sudo[92283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:17 compute-0 python3[92285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:17 compute-0 podman[92286]: 2025-12-04 10:16:17.502310894 +0000 UTC m=+0.059833011 container create a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:17 compute-0 systemd[1]: Started libpod-conmon-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope.
Dec 04 10:16:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:17 compute-0 podman[92286]: 2025-12-04 10:16:17.480478371 +0000 UTC m=+0.038000468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:17 compute-0 podman[92286]: 2025-12-04 10:16:17.593347217 +0000 UTC m=+0.150869314 container init a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:16:17 compute-0 podman[92286]: 2025-12-04 10:16:17.605638497 +0000 UTC m=+0.163160574 container start a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:16:17 compute-0 podman[92286]: 2025-12-04 10:16:17.60908556 +0000 UTC m=+0.166607637 container attach a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:18 compute-0 ceph-mon[75358]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 04 10:16:18 compute-0 ceph-mon[75358]: osdmap e31: 3 total, 3 up, 3 in
Dec 04 10:16:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 04 10:16:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 04 10:16:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 04 10:16:19 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 04 10:16:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 04 10:16:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 04 10:16:19 compute-0 loving_gauss[92301]: enabled application 'rbd' on pool 'images'
Dec 04 10:16:19 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 04 10:16:19 compute-0 systemd[1]: libpod-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope: Deactivated successfully.
Dec 04 10:16:19 compute-0 podman[92286]: 2025-12-04 10:16:19.054447983 +0000 UTC m=+1.611970060 container died a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc-merged.mount: Deactivated successfully.
Dec 04 10:16:19 compute-0 podman[92286]: 2025-12-04 10:16:19.117205484 +0000 UTC m=+1.674727601 container remove a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:19 compute-0 systemd[1]: libpod-conmon-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope: Deactivated successfully.
Dec 04 10:16:19 compute-0 sudo[92283]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:19 compute-0 sudo[92361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gopbrktcthpsngeahloaomyiwbdhjgas ; /usr/bin/python3'
Dec 04 10:16:19 compute-0 sudo[92361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:19 compute-0 python3[92363]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:19 compute-0 podman[92364]: 2025-12-04 10:16:19.540268961 +0000 UTC m=+0.067609032 container create 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:16:19 compute-0 systemd[1]: Started libpod-conmon-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope.
Dec 04 10:16:19 compute-0 podman[92364]: 2025-12-04 10:16:19.510010782 +0000 UTC m=+0.037350953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:19 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:19 compute-0 podman[92364]: 2025-12-04 10:16:19.629278803 +0000 UTC m=+0.156618894 container init 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:19 compute-0 podman[92364]: 2025-12-04 10:16:19.638624572 +0000 UTC m=+0.165964683 container start 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:19 compute-0 podman[92364]: 2025-12-04 10:16:19.644437084 +0000 UTC m=+0.171777185 container attach 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:20 compute-0 ceph-mon[75358]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 04 10:16:20 compute-0 ceph-mon[75358]: osdmap e32: 3 total, 3 up, 3 in
Dec 04 10:16:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 04 10:16:20 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 04 10:16:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 04 10:16:21 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 04 10:16:21 compute-0 ceph-mon[75358]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 04 10:16:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 04 10:16:21 compute-0 eager_hugle[92379]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 04 10:16:21 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 04 10:16:21 compute-0 systemd[1]: libpod-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope: Deactivated successfully.
Dec 04 10:16:21 compute-0 podman[92404]: 2025-12-04 10:16:21.115424032 +0000 UTC m=+0.039909413 container died 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec-merged.mount: Deactivated successfully.
Dec 04 10:16:21 compute-0 podman[92404]: 2025-12-04 10:16:21.159238258 +0000 UTC m=+0.083723649 container remove 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:21 compute-0 systemd[1]: libpod-conmon-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope: Deactivated successfully.
Dec 04 10:16:21 compute-0 sudo[92361]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:21 compute-0 sudo[92442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkklcatioypchgddaiwgmxjdjxjfhvat ; /usr/bin/python3'
Dec 04 10:16:21 compute-0 sudo[92442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:21 compute-0 python3[92444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:21 compute-0 podman[92445]: 2025-12-04 10:16:21.616388743 +0000 UTC m=+0.067248717 container create d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:16:21 compute-0 systemd[1]: Started libpod-conmon-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope.
Dec 04 10:16:21 compute-0 podman[92445]: 2025-12-04 10:16:21.589411797 +0000 UTC m=+0.040271801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:21 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:21 compute-0 podman[92445]: 2025-12-04 10:16:21.71894738 +0000 UTC m=+0.169807354 container init d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:16:21 compute-0 podman[92445]: 2025-12-04 10:16:21.730148933 +0000 UTC m=+0.181008937 container start d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:21 compute-0 podman[92445]: 2025-12-04 10:16:21.736149279 +0000 UTC m=+0.187009293 container attach d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:22 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 04 10:16:22 compute-0 ceph-mon[75358]: osdmap e33: 3 total, 3 up, 3 in
Dec 04 10:16:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 04 10:16:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 04 10:16:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 04 10:16:23 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 04 10:16:23 compute-0 ceph-mon[75358]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 04 10:16:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 04 10:16:23 compute-0 amazing_napier[92460]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 04 10:16:23 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 04 10:16:23 compute-0 systemd[1]: libpod-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope: Deactivated successfully.
Dec 04 10:16:23 compute-0 podman[92445]: 2025-12-04 10:16:23.103948118 +0000 UTC m=+1.554808092 container died d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d-merged.mount: Deactivated successfully.
Dec 04 10:16:23 compute-0 podman[92445]: 2025-12-04 10:16:23.156695091 +0000 UTC m=+1.607555065 container remove d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:16:23 compute-0 systemd[1]: libpod-conmon-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope: Deactivated successfully.
Dec 04 10:16:23 compute-0 sudo[92442]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:24 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 04 10:16:24 compute-0 ceph-mon[75358]: osdmap e34: 3 total, 3 up, 3 in
Dec 04 10:16:24 compute-0 python3[92572]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:16:24 compute-0 python3[92643]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843383.9314525-36514-156799691508851/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:16:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:25 compute-0 ceph-mon[75358]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:25 compute-0 sudo[92743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cejmjhyjgpuinsnbddvlvlouiwmwmogi ; /usr/bin/python3'
Dec 04 10:16:25 compute-0 sudo[92743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:25 compute-0 python3[92745]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:16:25 compute-0 sudo[92743]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:25 compute-0 sudo[92818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifljefkhikacrcawcniypiaogngdfnw ; /usr/bin/python3'
Dec 04 10:16:25 compute-0 sudo[92818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:25 compute-0 python3[92820]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843385.0338879-36528-207500483054033/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4d95922f97b49ea28e47c382de2b5d80693dc831 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:16:25 compute-0 sudo[92818]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:26 compute-0 sudo[92868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwammjdpfdckokkebfzwugpxgubbyqte ; /usr/bin/python3'
Dec 04 10:16:26 compute-0 sudo[92868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:26 compute-0 python3[92870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:26 compute-0 podman[92871]: 2025-12-04 10:16:26.458356129 +0000 UTC m=+0.054813986 container create 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:26 compute-0 systemd[1]: Started libpod-conmon-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope.
Dec 04 10:16:26 compute-0 podman[92871]: 2025-12-04 10:16:26.430812138 +0000 UTC m=+0.027270015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:26 compute-0 podman[92871]: 2025-12-04 10:16:26.56854704 +0000 UTC m=+0.165004987 container init 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:26 compute-0 podman[92871]: 2025-12-04 10:16:26.575679293 +0000 UTC m=+0.172137190 container start 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:26 compute-0 podman[92871]: 2025-12-04 10:16:26.581021594 +0000 UTC m=+0.177479491 container attach 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:16:26
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups']
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:16:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:26 compute-0 sshd-session[92246]: Invalid user ionadmin from 101.47.163.20 port 34140
Dec 04 10:16:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 04 10:16:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:16:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 04 10:16:26 compute-0 wizardly_jemison[92886]: 
Dec 04 10:16:26 compute-0 wizardly_jemison[92886]: [global]
Dec 04 10:16:26 compute-0 wizardly_jemison[92886]:         fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:16:26 compute-0 wizardly_jemison[92886]:         mon_host = 192.168.122.100
Dec 04 10:16:26 compute-0 wizardly_jemison[92886]:         rgw_keystone_api_version = 3
Dec 04 10:16:27 compute-0 systemd[1]: libpod-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope: Deactivated successfully.
Dec 04 10:16:27 compute-0 podman[92871]: 2025-12-04 10:16:27.024834435 +0000 UTC m=+0.621292322 container died 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da-merged.mount: Deactivated successfully.
Dec 04 10:16:27 compute-0 podman[92871]: 2025-12-04 10:16:27.077930317 +0000 UTC m=+0.674388214 container remove 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:16:27 compute-0 systemd[1]: libpod-conmon-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope: Deactivated successfully.
Dec 04 10:16:27 compute-0 sudo[92911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:27 compute-0 sudo[92911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:27 compute-0 sudo[92911]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:27 compute-0 sudo[92868]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:27 compute-0 sudo[92949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:16:27 compute-0 sudo[92949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:27 compute-0 sudo[92997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-worpdkawqcabjqzsfmsbohnywijybzwl ; /usr/bin/python3'
Dec 04 10:16:27 compute-0 sudo[92997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:27 compute-0 python3[92999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:27 compute-0 podman[93014]: 2025-12-04 10:16:27.569348838 +0000 UTC m=+0.084014776 container create ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:27 compute-0 systemd[1]: Started libpod-conmon-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope.
Dec 04 10:16:27 compute-0 podman[93014]: 2025-12-04 10:16:27.532662095 +0000 UTC m=+0.047328143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:27 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:27 compute-0 podman[93014]: 2025-12-04 10:16:27.670423707 +0000 UTC m=+0.185089685 container init ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:27 compute-0 podman[93014]: 2025-12-04 10:16:27.679498819 +0000 UTC m=+0.194164757 container start ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:27 compute-0 podman[93014]: 2025-12-04 10:16:27.683184738 +0000 UTC m=+0.197850866 container attach ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:27 compute-0 podman[93062]: 2025-12-04 10:16:27.739887738 +0000 UTC m=+0.076255337 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:27 compute-0 ceph-mon[75358]: pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:27 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 04 10:16:27 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 04 10:16:27 compute-0 podman[93062]: 2025-12-04 10:16:27.844826242 +0000 UTC m=+0.181193871 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:16:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:16:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2436301133' entity='client.admin' 
Dec 04 10:16:28 compute-0 eloquent_easley[93055]: set ssl_option
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:28 compute-0 systemd[1]: libpod-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope: Deactivated successfully.
Dec 04 10:16:28 compute-0 podman[93181]: 2025-12-04 10:16:28.318222564 +0000 UTC m=+0.031660412 container died ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce-merged.mount: Deactivated successfully.
Dec 04 10:16:28 compute-0 podman[93181]: 2025-12-04 10:16:28.364552751 +0000 UTC m=+0.077990509 container remove ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:16:28 compute-0 systemd[1]: libpod-conmon-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope: Deactivated successfully.
Dec 04 10:16:28 compute-0 sudo[92997]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:28 compute-0 sudo[93259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnxlfzwceobpnhusvuhyuakelpdkrydp ; /usr/bin/python3'
Dec 04 10:16:28 compute-0 sudo[93259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:28 compute-0 sudo[92949]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:28 compute-0 python3[93269]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:28 compute-0 sudo[93274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:28 compute-0 sudo[93274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:28 compute-0 sudo[93274]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 04 10:16:28 compute-0 podman[93281]: 2025-12-04 10:16:28.760346445 +0000 UTC m=+0.061170020 container create 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2436301133' entity='client.admin' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:28 compute-0 sudo[93310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:16:28 compute-0 sudo[93310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 04 10:16:28 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:28 compute-0 systemd[1]: Started libpod-conmon-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope.
Dec 04 10:16:28 compute-0 podman[93281]: 2025-12-04 10:16:28.740713477 +0000 UTC m=+0.041537072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:28 compute-0 podman[93281]: 2025-12-04 10:16:28.898568598 +0000 UTC m=+0.199392173 container init 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:28 compute-0 podman[93281]: 2025-12-04 10:16:28.906472971 +0000 UTC m=+0.207296546 container start 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:28 compute-0 podman[93281]: 2025-12-04 10:16:28.914866275 +0000 UTC m=+0.215689870 container attach 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.141255525 +0000 UTC m=+0.059865628 container create 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:16:29 compute-0 systemd[1]: Started libpod-conmon-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope.
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.112890614 +0000 UTC m=+0.031500757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.239189789 +0000 UTC m=+0.157799942 container init 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.250721889 +0000 UTC m=+0.169332022 container start 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.255123876 +0000 UTC m=+0.173734019 container attach 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:16:29 compute-0 vibrant_euclid[93391]: 167 167
Dec 04 10:16:29 compute-0 systemd[1]: libpod-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope: Deactivated successfully.
Dec 04 10:16:29 compute-0 conmon[93391]: conmon 041d0833b6926fab11b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope/container/memory.events
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.265974791 +0000 UTC m=+0.184584904 container died 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-790bf1ebcae194e50bc492c33b67666964808443711286f93b0d7adb7ec93f55-merged.mount: Deactivated successfully.
Dec 04 10:16:29 compute-0 podman[93375]: 2025-12-04 10:16:29.308943316 +0000 UTC m=+0.227553419 container remove 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:29 compute-0 systemd[1]: libpod-conmon-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope: Deactivated successfully.
Dec 04 10:16:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:29 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:29 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 04 10:16:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:29 compute-0 vigilant_northcutt[93339]: Scheduled rgw.rgw update...
Dec 04 10:16:29 compute-0 systemd[1]: libpod-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope: Deactivated successfully.
Dec 04 10:16:29 compute-0 podman[93281]: 2025-12-04 10:16:29.375147348 +0000 UTC m=+0.675970923 container died 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 04 10:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1-merged.mount: Deactivated successfully.
Dec 04 10:16:29 compute-0 podman[93281]: 2025-12-04 10:16:29.413013199 +0000 UTC m=+0.713836774 container remove 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:29 compute-0 sudo[93259]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:29 compute-0 systemd[1]: libpod-conmon-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope: Deactivated successfully.
Dec 04 10:16:29 compute-0 podman[93429]: 2025-12-04 10:16:29.488129367 +0000 UTC m=+0.047557969 container create 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:16:29 compute-0 systemd[1]: Started libpod-conmon-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope.
Dec 04 10:16:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:29 compute-0 podman[93429]: 2025-12-04 10:16:29.467826523 +0000 UTC m=+0.027255135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:29 compute-0 podman[93429]: 2025-12-04 10:16:29.565544532 +0000 UTC m=+0.124973144 container init 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:29 compute-0 podman[93429]: 2025-12-04 10:16:29.582198097 +0000 UTC m=+0.141626689 container start 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:16:29 compute-0 podman[93429]: 2025-12-04 10:16:29.586301067 +0000 UTC m=+0.145729719 container attach 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 04 10:16:29 compute-0 ceph-mon[75358]: pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:29 compute-0 ceph-mon[75358]: osdmap e35: 3 total, 3 up, 3 in
Dec 04 10:16:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 04 10:16:29 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 04 10:16:29 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 04 10:16:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 infallible_northcutt[93445]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:16:30 compute-0 infallible_northcutt[93445]: --> All data devices are unavailable
Dec 04 10:16:30 compute-0 systemd[1]: libpod-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope: Deactivated successfully.
Dec 04 10:16:30 compute-0 podman[93429]: 2025-12-04 10:16:30.140618797 +0000 UTC m=+0.700047389 container died 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911-merged.mount: Deactivated successfully.
Dec 04 10:16:30 compute-0 podman[93429]: 2025-12-04 10:16:30.195625716 +0000 UTC m=+0.755054308 container remove 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:16:30 compute-0 systemd[1]: libpod-conmon-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope: Deactivated successfully.
Dec 04 10:16:30 compute-0 sudo[93310]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:30 compute-0 sudo[93500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:30 compute-0 sudo[93500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:30 compute-0 sudo[93500]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:30 compute-0 sudo[93554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:16:30 compute-0 sudo[93554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:30 compute-0 python3[93602]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.627288852 +0000 UTC m=+0.040638480 container create 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:16:30 compute-0 systemd[1]: Started libpod-conmon-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope.
Dec 04 10:16:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.694712643 +0000 UTC m=+0.108062311 container init 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:16:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.700866713 +0000 UTC m=+0.114216351 container start 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.609262943 +0000 UTC m=+0.022612611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.704850049 +0000 UTC m=+0.118199687 container attach 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:16:30 compute-0 serene_almeida[93678]: 167 167
Dec 04 10:16:30 compute-0 systemd[1]: libpod-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope: Deactivated successfully.
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.708335705 +0000 UTC m=+0.121685363 container died 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-46f71bf5488be3813c144391e2cc903972e1dbed847fde97856b9b9330aad92e-merged.mount: Deactivated successfully.
Dec 04 10:16:30 compute-0 podman[93636]: 2025-12-04 10:16:30.753425342 +0000 UTC m=+0.166774970 container remove 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:30 compute-0 systemd[1]: libpod-conmon-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope: Deactivated successfully.
Dec 04 10:16:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 04 10:16:30 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.048410416s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 67.081153870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:30 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.048410416s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.081153870s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:30 compute-0 ceph-mon[75358]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:30 compute-0 ceph-mon[75358]: Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:30 compute-0 ceph-mon[75358]: osdmap e36: 3 total, 3 up, 3 in
Dec 04 10:16:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 04 10:16:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:30 compute-0 python3[93703]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843390.2445598-36569-115274787865741/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:16:30 compute-0 podman[93725]: 2025-12-04 10:16:30.918907389 +0000 UTC m=+0.038981510 container create bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:30 compute-0 systemd[1]: Started libpod-conmon-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope.
Dec 04 10:16:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:30 compute-0 podman[93725]: 2025-12-04 10:16:30.900555343 +0000 UTC m=+0.020629484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:31 compute-0 podman[93725]: 2025-12-04 10:16:31.006460461 +0000 UTC m=+0.126534582 container init bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:31 compute-0 podman[93725]: 2025-12-04 10:16:31.015824799 +0000 UTC m=+0.135898920 container start bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:16:31 compute-0 podman[93725]: 2025-12-04 10:16:31.019970249 +0000 UTC m=+0.140044370 container attach bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:16:31 compute-0 sudo[93794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlgbihqrmanoxftlgtqxsiiqizzyofsd ; /usr/bin/python3'
Dec 04 10:16:31 compute-0 sudo[93794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:31 compute-0 competent_satoshi[93766]: {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     "0": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "devices": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "/dev/loop3"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             ],
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_name": "ceph_lv0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_size": "21470642176",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "name": "ceph_lv0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "tags": {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.crush_device_class": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.encrypted": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_id": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.vdo": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.with_tpm": "0"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             },
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "vg_name": "ceph_vg0"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         }
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     ],
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     "1": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "devices": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "/dev/loop4"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             ],
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_name": "ceph_lv1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_size": "21470642176",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "name": "ceph_lv1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "tags": {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.crush_device_class": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.encrypted": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_id": "1",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.vdo": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.with_tpm": "0"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             },
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "vg_name": "ceph_vg1"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         }
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     ],
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     "2": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "devices": [
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "/dev/loop5"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             ],
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_name": "ceph_lv2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_size": "21470642176",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "name": "ceph_lv2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "tags": {
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.crush_device_class": "",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.encrypted": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osd_id": "2",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.vdo": "0",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:                 "ceph.with_tpm": "0"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             },
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "type": "block",
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:             "vg_name": "ceph_vg2"
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:         }
Dec 04 10:16:31 compute-0 competent_satoshi[93766]:     ]
Dec 04 10:16:31 compute-0 competent_satoshi[93766]: }
Dec 04 10:16:31 compute-0 systemd[1]: libpod-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope: Deactivated successfully.
Dec 04 10:16:31 compute-0 podman[93725]: 2025-12-04 10:16:31.326654094 +0000 UTC m=+0.446728215 container died bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:31 compute-0 python3[93798]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f-merged.mount: Deactivated successfully.
Dec 04 10:16:31 compute-0 podman[93725]: 2025-12-04 10:16:31.37624825 +0000 UTC m=+0.496322371 container remove bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:31 compute-0 systemd[1]: libpod-conmon-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope: Deactivated successfully.
Dec 04 10:16:31 compute-0 sudo[93554]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.425086529 +0000 UTC m=+0.051479554 container create ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:16:31 compute-0 systemd[1]: Started libpod-conmon-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope.
Dec 04 10:16:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:31 compute-0 sudo[93827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:31 compute-0 sudo[93827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.490748997 +0000 UTC m=+0.117142042 container init ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 04 10:16:31 compute-0 sudo[93827]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.498359182 +0000 UTC m=+0.124752207 container start ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.405617755 +0000 UTC m=+0.032010810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.501540469 +0000 UTC m=+0.127933494 container attach ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:16:31 compute-0 sudo[93858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:16:31 compute-0 sudo[93858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.82042305 +0000 UTC m=+0.042925435 container create 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:16:31 compute-0 sshd-session[92246]: Received disconnect from 101.47.163.20 port 34140:11: Bye Bye [preauth]
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 04 10:16:31 compute-0 sshd-session[92246]: Disconnected from invalid user ionadmin 101.47.163.20 port 34140 [preauth]
Dec 04 10:16:31 compute-0 ceph-mon[75358]: pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: osdmap e37: 3 total, 3 up, 3 in
Dec 04 10:16:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=14.020574570s) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active pruub 76.672187805s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=14.020574570s) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown pruub 76.672187805s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:31 compute-0 systemd[1]: Started libpod-conmon-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope.
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.798717333 +0000 UTC m=+0.021219748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.911886927 +0000 UTC m=+0.134389362 container init 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.91941073 +0000 UTC m=+0.141913115 container start 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:16:31 compute-0 pedantic_cerf[93929]: 167 167
Dec 04 10:16:31 compute-0 systemd[1]: libpod-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope: Deactivated successfully.
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.923644953 +0000 UTC m=+0.146147358 container attach 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.925256422 +0000 UTC m=+0.147758827 container died 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 04 10:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c6a01d7ed7db11ab5549fa07e77339c1a916232d690cdc620789ee804acc08-merged.mount: Deactivated successfully.
Dec 04 10:16:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[75354]: 2025-12-04T10:16:31.946+0000 7f6c157b8640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e2 new map
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-04T10:16:31.947702+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-04T10:16:31.947313+0000
                                           modified        2025-12-04T10:16:31.947313+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=37/39 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:31 compute-0 podman[93913]: 2025-12-04 10:16:31.973208309 +0000 UTC m=+0.195710694 container remove 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 04 10:16:31 compute-0 systemd[1]: libpod-conmon-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope: Deactivated successfully.
Dec 04 10:16:31 compute-0 systemd[1]: libpod-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope: Deactivated successfully.
Dec 04 10:16:31 compute-0 podman[93814]: 2025-12-04 10:16:31.997589983 +0000 UTC m=+0.623982998 container died ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771-merged.mount: Deactivated successfully.
Dec 04 10:16:32 compute-0 podman[93814]: 2025-12-04 10:16:32.033907317 +0000 UTC m=+0.660300342 container remove ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:32 compute-0 sudo[93794]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:32 compute-0 systemd[1]: libpod-conmon-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope: Deactivated successfully.
Dec 04 10:16:32 compute-0 podman[93967]: 2025-12-04 10:16:32.14377065 +0000 UTC m=+0.043514870 container create c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:16:32 compute-0 systemd[1]: Started libpod-conmon-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope.
Dec 04 10:16:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 podman[93967]: 2025-12-04 10:16:32.12568775 +0000 UTC m=+0.025431990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:32 compute-0 podman[93967]: 2025-12-04 10:16:32.228363229 +0000 UTC m=+0.128107479 container init c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:32 compute-0 sudo[94010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnaufhszgwguntoogkhxringrpsyqytg ; /usr/bin/python3'
Dec 04 10:16:32 compute-0 sudo[94010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:32 compute-0 podman[93967]: 2025-12-04 10:16:32.235642846 +0000 UTC m=+0.135387086 container start c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:32 compute-0 podman[93967]: 2025-12-04 10:16:32.240244198 +0000 UTC m=+0.139988418 container attach c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 04 10:16:32 compute-0 python3[94013]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:32 compute-0 podman[94015]: 2025-12-04 10:16:32.487339813 +0000 UTC m=+0.066406568 container create de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:32 compute-0 systemd[1]: Started libpod-conmon-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope.
Dec 04 10:16:32 compute-0 podman[94015]: 2025-12-04 10:16:32.459819032 +0000 UTC m=+0.038885827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:32 compute-0 podman[94015]: 2025-12-04 10:16:32.561668581 +0000 UTC m=+0.140735376 container init de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:32 compute-0 podman[94015]: 2025-12-04 10:16:32.567659557 +0000 UTC m=+0.146726322 container start de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:32 compute-0 podman[94015]: 2025-12-04 10:16:32.573056878 +0000 UTC m=+0.152123653 container attach de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 1 peering, 31 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: osdmap e38: 3 total, 3 up, 3 in
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 04 10:16:32 compute-0 ceph-mon[75358]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: osdmap e39: 3 total, 3 up, 3 in
Dec 04 10:16:32 compute-0 ceph-mon[75358]: fsmap cephfs:0
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Dec 04 10:16:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 04 10:16:32 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 04 10:16:32 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 04 10:16:32 compute-0 lvm[94123]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:16:32 compute-0 lvm[94126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:16:32 compute-0 lvm[94123]: VG ceph_vg0 finished
Dec 04 10:16:32 compute-0 lvm[94126]: VG ceph_vg1 finished
Dec 04 10:16:32 compute-0 lvm[94128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:16:32 compute-0 lvm[94128]: VG ceph_vg2 finished
Dec 04 10:16:33 compute-0 lvm[94130]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:16:33 compute-0 lvm[94130]: VG ceph_vg1 finished
Dec 04 10:16:33 compute-0 lvm[94129]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:16:33 compute-0 lvm[94129]: VG ceph_vg0 finished
Dec 04 10:16:33 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:33 compute-0 ceph-mgr[75651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:33 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 04 10:16:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 beautiful_clarke[94039]: Scheduled mds.cephfs update...
Dec 04 10:16:33 compute-0 systemd[1]: libpod-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope: Deactivated successfully.
Dec 04 10:16:33 compute-0 podman[94015]: 2025-12-04 10:16:33.078776326 +0000 UTC m=+0.657843091 container died de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:16:33 compute-0 inspiring_snyder[93996]: {}
Dec 04 10:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4-merged.mount: Deactivated successfully.
Dec 04 10:16:33 compute-0 sshd-session[93462]: Connection closed by authenticating user root 183.123.27.87 port 50795 [preauth]
Dec 04 10:16:33 compute-0 podman[94015]: 2025-12-04 10:16:33.130201038 +0000 UTC m=+0.709267803 container remove de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:33 compute-0 systemd[1]: libpod-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Deactivated successfully.
Dec 04 10:16:33 compute-0 podman[93967]: 2025-12-04 10:16:33.138227403 +0000 UTC m=+1.037971623 container died c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:16:33 compute-0 systemd[1]: libpod-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Consumed 1.422s CPU time.
Dec 04 10:16:33 compute-0 systemd[1]: libpod-conmon-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope: Deactivated successfully.
Dec 04 10:16:33 compute-0 sudo[94010]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74-merged.mount: Deactivated successfully.
Dec 04 10:16:33 compute-0 podman[93967]: 2025-12-04 10:16:33.184452548 +0000 UTC m=+1.084196768 container remove c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:16:33 compute-0 systemd[1]: libpod-conmon-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Deactivated successfully.
Dec 04 10:16:33 compute-0 sudo[93858]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=8.615765572s) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 79.052185059s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.555814743s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 83.992263794s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.555814743s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown pruub 83.992263794s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=8.615765572s) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 79.052185059s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:33 compute-0 sudo[94158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:16:33 compute-0 sudo[94158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:33 compute-0 sudo[94158]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:33 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=14.472810745s) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active pruub 70.113349915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:33 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=14.472810745s) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown pruub 70.113349915s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:33 compute-0 sudo[94183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:33 compute-0 sudo[94183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:33 compute-0 sudo[94183]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:33 compute-0 sudo[94208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:16:33 compute-0 sudo[94208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:33 compute-0 sudo[94310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgiuwmtjvitcolbmxqyvvreguiattffk ; /usr/bin/python3'
Dec 04 10:16:33 compute-0 sudo[94310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:33 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 04 10:16:33 compute-0 python3[94319]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 04 10:16:33 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 04 10:16:33 compute-0 sudo[94310]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:33 compute-0 ceph-mon[75358]: pgmap v93: 69 pgs: 1 peering, 31 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:33 compute-0 ceph-mon[75358]: osdmap e40: 3 total, 3 up, 3 in
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:16:33 compute-0 ceph-mon[75358]: Saving service mds.cephfs spec with placement compute-0
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:33 compute-0 podman[94363]: 2025-12-04 10:16:33.982289626 +0000 UTC m=+0.064707946 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:34 compute-0 sudo[94449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heonrczwiyjvdwdyvmkdgtzoofqpwcvz ; /usr/bin/python3'
Dec 04 10:16:34 compute-0 sudo[94449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:34 compute-0 podman[94363]: 2025-12-04 10:16:34.113586762 +0000 UTC m=+0.196005092 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:34 compute-0 python3[94452]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843393.5508635-36599-127084991356701/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=78fa63d8c69ed08876e15c6d423f4ac4e13914fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 04 10:16:34 compute-0 sudo[94449]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=40/41 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:34 compute-0 sudo[94608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtqplktpaquemitbyyiuiebfzmpkobfv ; /usr/bin/python3'
Dec 04 10:16:34 compute-0 sudo[94608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 04 10:16:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 04 10:16:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v96: 162 pgs: 1 peering, 124 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:34 compute-0 python3[94613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:34 compute-0 sudo[94208]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:34 compute-0 podman[94632]: 2025-12-04 10:16:34.840626526 +0000 UTC m=+0.043088110 container create 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:34 compute-0 systemd[76741]: Starting Mark boot as successful...
Dec 04 10:16:34 compute-0 systemd[1]: Started libpod-conmon-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope.
Dec 04 10:16:34 compute-0 systemd[76741]: Finished Mark boot as successful.
Dec 04 10:16:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:34 compute-0 podman[94632]: 2025-12-04 10:16:34.819575134 +0000 UTC m=+0.022036698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:34 compute-0 sudo[94648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:34 compute-0 sudo[94648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:34 compute-0 sudo[94648]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:34 compute-0 sshd-session[94274]: Invalid user cgpexpert from 103.179.218.243 port 41274
Dec 04 10:16:34 compute-0 podman[94632]: 2025-12-04 10:16:34.936720555 +0000 UTC m=+0.139182149 container init 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:34 compute-0 podman[94632]: 2025-12-04 10:16:34.951271049 +0000 UTC m=+0.153732623 container start 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:16:34 compute-0 podman[94632]: 2025-12-04 10:16:34.95581201 +0000 UTC m=+0.158273594 container attach 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:16:34 compute-0 sudo[94676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:16:34 compute-0 sudo[94676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:35 compute-0 sshd-session[94274]: Received disconnect from 103.179.218.243 port 41274:11: Bye Bye [preauth]
Dec 04 10:16:35 compute-0 sshd-session[94274]: Disconnected from invalid user cgpexpert 103.179.218.243 port 41274 [preauth]
Dec 04 10:16:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 04 10:16:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 04 10:16:35 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 04 10:16:35 compute-0 ceph-mon[75358]: 2.1e scrub starts
Dec 04 10:16:35 compute-0 ceph-mon[75358]: 2.1e scrub ok
Dec 04 10:16:35 compute-0 ceph-mon[75358]: osdmap e41: 3 total, 3 up, 3 in
Dec 04 10:16:35 compute-0 ceph-mon[75358]: 3.1f scrub starts
Dec 04 10:16:35 compute-0 ceph-mon[75358]: 3.1f scrub ok
Dec 04 10:16:35 compute-0 ceph-mon[75358]: pgmap v96: 162 pgs: 1 peering, 124 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.329509454 +0000 UTC m=+0.056619698 container create 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:16:35 compute-0 systemd[1]: Started libpod-conmon-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope.
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.304876866 +0000 UTC m=+0.031987130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.432707256 +0000 UTC m=+0.159817510 container init 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.441259394 +0000 UTC m=+0.168369668 container start 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.445651411 +0000 UTC m=+0.172761675 container attach 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:35 compute-0 clever_payne[94750]: 167 167
Dec 04 10:16:35 compute-0 systemd[1]: libpod-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope: Deactivated successfully.
Dec 04 10:16:35 compute-0 conmon[94750]: conmon 5d1afebf8b4a9c1b5495 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope/container/memory.events
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.451473224 +0000 UTC m=+0.178583518 container died 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 04 10:16:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 04 10:16:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 04 10:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17cfb3fd095b1a8234a7cdfdd1088723414941090c93d536a7d8c4bf260dd79-merged.mount: Deactivated successfully.
Dec 04 10:16:35 compute-0 systemd[1]: libpod-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope: Deactivated successfully.
Dec 04 10:16:35 compute-0 podman[94734]: 2025-12-04 10:16:35.50186142 +0000 UTC m=+0.228971664 container remove 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:35 compute-0 podman[94632]: 2025-12-04 10:16:35.502606918 +0000 UTC m=+0.705068472 container died 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:35 compute-0 systemd[1]: libpod-conmon-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope: Deactivated successfully.
Dec 04 10:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f-merged.mount: Deactivated successfully.
Dec 04 10:16:35 compute-0 podman[94632]: 2025-12-04 10:16:35.550034922 +0000 UTC m=+0.752496476 container remove 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:35 compute-0 systemd[1]: libpod-conmon-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope: Deactivated successfully.
Dec 04 10:16:35 compute-0 sudo[94608]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:35 compute-0 podman[94788]: 2025-12-04 10:16:35.663694528 +0000 UTC m=+0.045975260 container create 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:35 compute-0 systemd[1]: Started libpod-conmon-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope.
Dec 04 10:16:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:35 compute-0 podman[94788]: 2025-12-04 10:16:35.641789145 +0000 UTC m=+0.024069907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:35 compute-0 podman[94788]: 2025-12-04 10:16:35.748983354 +0000 UTC m=+0.131264116 container init 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:16:35 compute-0 podman[94788]: 2025-12-04 10:16:35.756414115 +0000 UTC m=+0.138694847 container start 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:35 compute-0 podman[94788]: 2025-12-04 10:16:35.760162346 +0000 UTC m=+0.142443088 container attach 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:36 compute-0 sudo[94842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttnpatfhpwscetorbpzonbuvszfvzxeh ; /usr/bin/python3'
Dec 04 10:16:36 compute-0 sudo[94842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:36 compute-0 python3[94845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:36 compute-0 angry_satoshi[94805]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:16:36 compute-0 angry_satoshi[94805]: --> All data devices are unavailable
Dec 04 10:16:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:16:36 compute-0 ceph-mon[75358]: osdmap e42: 3 total, 3 up, 3 in
Dec 04 10:16:36 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 04 10:16:36 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 04 10:16:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42 pruub=15.636316299s) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active pruub 82.768257141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42 pruub=15.636316299s) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown pruub 82.768257141s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:36 compute-0 systemd[1]: libpod-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope: Deactivated successfully.
Dec 04 10:16:36 compute-0 podman[94788]: 2025-12-04 10:16:36.333986652 +0000 UTC m=+0.716267384 container died 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.349675444 +0000 UTC m=+0.053208356 container create 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975-merged.mount: Deactivated successfully.
Dec 04 10:16:36 compute-0 systemd[1]: Started libpod-conmon-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope.
Dec 04 10:16:36 compute-0 podman[94788]: 2025-12-04 10:16:36.395062328 +0000 UTC m=+0.777343060 container remove 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:36 compute-0 systemd[1]: libpod-conmon-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope: Deactivated successfully.
Dec 04 10:16:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.321063498 +0000 UTC m=+0.024596410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.433906194 +0000 UTC m=+0.137439156 container init 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.441393766 +0000 UTC m=+0.144926678 container start 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:36 compute-0 sudo[94676]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.44483382 +0000 UTC m=+0.148366732 container attach 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:36 compute-0 sudo[94886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:36 compute-0 sudo[94886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:36 compute-0 sudo[94886]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:36 compute-0 sudo[94911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:16:36 compute-0 sudo[94911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.82203007 +0000 UTC m=+0.040829275 container create 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:36 compute-0 systemd[1]: Started libpod-conmon-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope.
Dec 04 10:16:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.803595921 +0000 UTC m=+0.022395126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.899935536 +0000 UTC m=+0.118734771 container init 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.909847987 +0000 UTC m=+0.128647192 container start 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:16:36 compute-0 laughing_bartik[94983]: 167 167
Dec 04 10:16:36 compute-0 systemd[1]: libpod-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope: Deactivated successfully.
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.91450292 +0000 UTC m=+0.133302155 container attach 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.915015693 +0000 UTC m=+0.133814898 container died 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52611072ca93095410e1653f18e792ee0ed55c848a82fb7b40ace31bc026f27-merged.mount: Deactivated successfully.
Dec 04 10:16:36 compute-0 podman[94966]: 2025-12-04 10:16:36.954238428 +0000 UTC m=+0.173037633 container remove 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 04 10:16:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417251768' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:36 compute-0 upbeat_hamilton[94882]: 
Dec 04 10:16:36 compute-0 upbeat_hamilton[94882]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":124},{"state_name":"active+clean","count":37},{"state_name":"peering","count":1}],"num_pgs":162,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83996672,"bytes_avail":64327929856,"bytes_total":64411926528,"unknown_pgs_ratio":0.76543211936950684,"inactive_pgs_ratio":0.0061728395521640778},"fsmap":{"epoch":2,"btime":"2025-12-04T10:16:31:947702+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-04T10:15:28.674444+0000","services":{}},"progress_events":{"e8fbb843-ac01-485d-b1b9-727e8a8c205a":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 04 10:16:36 compute-0 systemd[1]: libpod-conmon-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope: Deactivated successfully.
Dec 04 10:16:36 compute-0 systemd[1]: libpod-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope: Deactivated successfully.
Dec 04 10:16:36 compute-0 podman[94852]: 2025-12-04 10:16:36.984585987 +0000 UTC m=+0.688118969 container died 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac-merged.mount: Deactivated successfully.
Dec 04 10:16:37 compute-0 podman[94852]: 2025-12-04 10:16:37.039077042 +0000 UTC m=+0.742609964 container remove 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:16:37 compute-0 systemd[1]: libpod-conmon-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope: Deactivated successfully.
Dec 04 10:16:37 compute-0 sudo[94842]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.155440475 +0000 UTC m=+0.060266179 container create 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:37 compute-0 systemd[1]: Started libpod-conmon-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope.
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.126887189 +0000 UTC m=+0.031712893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:37 compute-0 sudo[95067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnkcnmlwipnispggwvffnuvcwyvaffz ; /usr/bin/python3'
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 sudo[95067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 04 10:16:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.314042955 +0000 UTC m=+0.218868669 container init 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:16:37 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-mon[75358]: pgmap v98: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:37 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2417251768' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.32492424 +0000 UTC m=+0.229749954 container start 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=42/43 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.332482703 +0000 UTC m=+0.237308437 container attach 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:37 compute-0 python3[95069]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:37 compute-0 podman[95072]: 2025-12-04 10:16:37.566836967 +0000 UTC m=+0.054684232 container create ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:16:37 compute-0 systemd[1]: Started libpod-conmon-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope.
Dec 04 10:16:37 compute-0 podman[95072]: 2025-12-04 10:16:37.538952538 +0000 UTC m=+0.026799873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:37 compute-0 podman[95072]: 2025-12-04 10:16:37.665945099 +0000 UTC m=+0.153792364 container init ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:37 compute-0 podman[95072]: 2025-12-04 10:16:37.674366734 +0000 UTC m=+0.162213999 container start ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:37 compute-0 podman[95072]: 2025-12-04 10:16:37.679383636 +0000 UTC m=+0.167230911 container attach ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:16:37 compute-0 brave_poincare[95061]: {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     "0": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "devices": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "/dev/loop3"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             ],
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_name": "ceph_lv0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_size": "21470642176",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "name": "ceph_lv0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "tags": {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.crush_device_class": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.encrypted": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_id": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.vdo": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.with_tpm": "0"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             },
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "vg_name": "ceph_vg0"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         }
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     ],
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     "1": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "devices": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "/dev/loop4"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             ],
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_name": "ceph_lv1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_size": "21470642176",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "name": "ceph_lv1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "tags": {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.crush_device_class": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.encrypted": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_id": "1",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.vdo": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.with_tpm": "0"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             },
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "vg_name": "ceph_vg1"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         }
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     ],
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     "2": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "devices": [
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "/dev/loop5"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             ],
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_name": "ceph_lv2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_size": "21470642176",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "name": "ceph_lv2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "tags": {
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.crush_device_class": "",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.encrypted": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osd_id": "2",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.vdo": "0",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:                 "ceph.with_tpm": "0"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             },
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "type": "block",
Dec 04 10:16:37 compute-0 brave_poincare[95061]:             "vg_name": "ceph_vg2"
Dec 04 10:16:37 compute-0 brave_poincare[95061]:         }
Dec 04 10:16:37 compute-0 brave_poincare[95061]:     ]
Dec 04 10:16:37 compute-0 brave_poincare[95061]: }
Dec 04 10:16:37 compute-0 systemd[1]: libpod-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope: Deactivated successfully.
Dec 04 10:16:37 compute-0 podman[95024]: 2025-12-04 10:16:37.719513552 +0000 UTC m=+0.624339236 container died 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:16:37 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 9 completed events
Dec 04 10:16:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:16:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:38 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 04 10:16:38 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 04 10:16:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:16:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707924097' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:16:39 compute-0 flamboyant_wilson[95092]: 
Dec 04 10:16:39 compute-0 flamboyant_wilson[95092]: {"epoch":1,"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","modified":"2025-12-04T10:14:01.294217Z","created":"2025-12-04T10:14:01.294217Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 04 10:16:39 compute-0 flamboyant_wilson[95092]: dumped monmap epoch 1
Dec 04 10:16:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d-merged.mount: Deactivated successfully.
Dec 04 10:16:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 04 10:16:39 compute-0 ceph-mon[75358]: osdmap e43: 3 total, 3 up, 3 in
Dec 04 10:16:39 compute-0 ceph-mon[75358]: pgmap v100: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:39 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/707924097' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:16:39 compute-0 systemd[1]: libpod-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope: Deactivated successfully.
Dec 04 10:16:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 04 10:16:39 compute-0 podman[95072]: 2025-12-04 10:16:39.601427924 +0000 UTC m=+2.089275219 container died ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:39 compute-0 podman[95024]: 2025-12-04 10:16:39.635045474 +0000 UTC m=+2.539871148 container remove 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271-merged.mount: Deactivated successfully.
Dec 04 10:16:39 compute-0 podman[95072]: 2025-12-04 10:16:39.676049831 +0000 UTC m=+2.163897086 container remove ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:16:39 compute-0 systemd[1]: libpod-conmon-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope: Deactivated successfully.
Dec 04 10:16:39 compute-0 systemd[1]: libpod-conmon-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope: Deactivated successfully.
Dec 04 10:16:39 compute-0 sudo[95067]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:39 compute-0 sudo[94911]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:39 compute-0 sudo[95138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:39 compute-0 sudo[95138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:39 compute-0 sudo[95138]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:39 compute-0 sudo[95163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:16:39 compute-0 sudo[95163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:40 compute-0 sudo[95223]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcorltrnagmxjmwfjuzychmxtyortdya ; /usr/bin/python3'
Dec 04 10:16:40 compute-0 sudo[95223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.170337491 +0000 UTC m=+0.062637866 container create dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:40 compute-0 systemd[1]: Started libpod-conmon-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope.
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.143950409 +0000 UTC m=+0.036250854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:40 compute-0 python3[95227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.279651882 +0000 UTC m=+0.171952277 container init dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.286629592 +0000 UTC m=+0.178929967 container start dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.289646375 +0000 UTC m=+0.181946750 container attach dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:40 compute-0 funny_franklin[95243]: 167 167
Dec 04 10:16:40 compute-0 systemd[1]: libpod-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope: Deactivated successfully.
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.291864419 +0000 UTC m=+0.184164804 container died dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.312445449 +0000 UTC m=+0.050336146 container create 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c40fa58852bd049868307ae6738ee77b2d6b93cc992f4cb7ed26e58473b93a3-merged.mount: Deactivated successfully.
Dec 04 10:16:40 compute-0 podman[95225]: 2025-12-04 10:16:40.3531515 +0000 UTC m=+0.245451875 container remove dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:40 compute-0 systemd[1]: Started libpod-conmon-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope.
Dec 04 10:16:40 compute-0 systemd[1]: libpod-conmon-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope: Deactivated successfully.
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.287302537 +0000 UTC m=+0.025193254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.414206216 +0000 UTC m=+0.152096933 container init 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.425420739 +0000 UTC m=+0.163311436 container start 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.432078212 +0000 UTC m=+0.169968909 container attach 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:40 compute-0 podman[95287]: 2025-12-04 10:16:40.51500592 +0000 UTC m=+0.036501470 container create 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:40 compute-0 systemd[1]: Started libpod-conmon-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope.
Dec 04 10:16:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:40 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:40 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 04 10:16:40 compute-0 podman[95287]: 2025-12-04 10:16:40.498075178 +0000 UTC m=+0.019570748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:40 compute-0 podman[95287]: 2025-12-04 10:16:40.602623782 +0000 UTC m=+0.124119372 container init 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:40 compute-0 podman[95287]: 2025-12-04 10:16:40.613827995 +0000 UTC m=+0.135323545 container start 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:40 compute-0 ceph-mon[75358]: 4.18 scrub starts
Dec 04 10:16:40 compute-0 ceph-mon[75358]: 4.18 scrub ok
Dec 04 10:16:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:40 compute-0 ceph-mon[75358]: 3.1c scrub starts
Dec 04 10:16:40 compute-0 ceph-mon[75358]: 3.1c scrub ok
Dec 04 10:16:40 compute-0 podman[95287]: 2025-12-04 10:16:40.6230874 +0000 UTC m=+0.144582970 container attach 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 04 10:16:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 04 10:16:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 04 10:16:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2551733214' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 04 10:16:40 compute-0 friendly_chandrasekhar[95278]: [client.openstack]
Dec 04 10:16:40 compute-0 friendly_chandrasekhar[95278]:         key = AQC7XjFpAAAAABAAfAp/GPFiYDh+96uFEDn7ew==
Dec 04 10:16:40 compute-0 friendly_chandrasekhar[95278]:         caps mgr = "allow *"
Dec 04 10:16:40 compute-0 friendly_chandrasekhar[95278]:         caps mon = "profile rbd"
Dec 04 10:16:40 compute-0 friendly_chandrasekhar[95278]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 04 10:16:40 compute-0 systemd[1]: libpod-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope: Deactivated successfully.
Dec 04 10:16:40 compute-0 podman[95246]: 2025-12-04 10:16:40.994940741 +0000 UTC m=+0.732831438 container died 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff-merged.mount: Deactivated successfully.
Dec 04 10:16:41 compute-0 podman[95246]: 2025-12-04 10:16:41.036494541 +0000 UTC m=+0.774385238 container remove 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:16:41 compute-0 systemd[1]: libpod-conmon-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope: Deactivated successfully.
Dec 04 10:16:41 compute-0 sudo[95223]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:41 compute-0 lvm[95415]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:16:41 compute-0 lvm[95415]: VG ceph_vg1 finished
Dec 04 10:16:41 compute-0 lvm[95414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:16:41 compute-0 lvm[95414]: VG ceph_vg0 finished
Dec 04 10:16:41 compute-0 lvm[95417]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:16:41 compute-0 lvm[95417]: VG ceph_vg2 finished
Dec 04 10:16:41 compute-0 hopeful_mirzakhani[95306]: {}
Dec 04 10:16:41 compute-0 systemd[1]: libpod-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Deactivated successfully.
Dec 04 10:16:41 compute-0 systemd[1]: libpod-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Consumed 1.555s CPU time.
Dec 04 10:16:41 compute-0 podman[95287]: 2025-12-04 10:16:41.567803833 +0000 UTC m=+1.089299403 container died 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:16:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 04 10:16:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 04 10:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc-merged.mount: Deactivated successfully.
Dec 04 10:16:41 compute-0 podman[95287]: 2025-12-04 10:16:41.619907911 +0000 UTC m=+1.141403461 container remove 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: 3.a scrub starts
Dec 04 10:16:41 compute-0 ceph-mon[75358]: 3.a scrub ok
Dec 04 10:16:41 compute-0 ceph-mon[75358]: pgmap v101: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:41 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2551733214' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 04 10:16:41 compute-0 systemd[1]: libpod-conmon-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Deactivated successfully.
Dec 04 10:16:41 compute-0 sudo[95163]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:41 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1))
Dec 04 10:16:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 04 10:16:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:41 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec 04 10:16:41 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec 04 10:16:41 compute-0 sudo[95432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:41 compute-0 sudo[95432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:41 compute-0 sudo[95432]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:41 compute-0 sudo[95457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:16:41 compute-0 sudo[95457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 04 10:16:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.333245722 +0000 UTC m=+0.051661878 container create 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:16:42 compute-0 systemd[1]: Started libpod-conmon-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope.
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.310482348 +0000 UTC m=+0.028898554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.442939222 +0000 UTC m=+0.161355388 container init 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.450986188 +0000 UTC m=+0.169402354 container start 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.455070417 +0000 UTC m=+0.173486603 container attach 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:42 compute-0 stoic_bardeen[95635]: 167 167
Dec 04 10:16:42 compute-0 systemd[1]: libpod-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope: Deactivated successfully.
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.459993897 +0000 UTC m=+0.178410053 container died 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a38778e9a0c361a4c79db6737c39aa353321e500e25bb58a9ae7afc1098ef38f-merged.mount: Deactivated successfully.
Dec 04 10:16:42 compute-0 podman[95575]: 2025-12-04 10:16:42.501435586 +0000 UTC m=+0.219851762 container remove 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:42 compute-0 systemd[1]: libpod-conmon-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope: Deactivated successfully.
Dec 04 10:16:42 compute-0 systemd[1]: Reloading.
Dec 04 10:16:42 compute-0 systemd-rc-local-generator[95732]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:16:42 compute-0 systemd-sysv-generator[95735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:16:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 04 10:16:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 04 10:16:42 compute-0 ceph-mon[75358]: 2.1f scrub starts
Dec 04 10:16:42 compute-0 ceph-mon[75358]: 2.1f scrub ok
Dec 04 10:16:42 compute-0 ceph-mon[75358]: 3.9 scrub starts
Dec 04 10:16:42 compute-0 ceph-mon[75358]: 3.9 scrub ok
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec 04 10:16:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:16:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:42 compute-0 sudo[95705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwvkbojscdxhixifnxatxjoggmltvgn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843402.163865-36671-258255636267341/async_wrapper.py j888336787719 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843402.163865-36671-258255636267341/AnsiballZ_command.py _'
Dec 04 10:16:42 compute-0 sudo[95705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:42 compute-0 systemd[1]: Reloading.
Dec 04 10:16:42 compute-0 systemd-rc-local-generator[95772]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:16:42 compute-0 systemd-sysv-generator[95775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:16:42 compute-0 ansible-async_wrapper.py[95742]: Invoked with j888336787719 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843402.163865-36671-258255636267341/AnsiballZ_command.py _
Dec 04 10:16:42 compute-0 ansible-async_wrapper.py[95783]: Starting module and watcher
Dec 04 10:16:42 compute-0 ansible-async_wrapper.py[95783]: Start watching 95784 (30)
Dec 04 10:16:42 compute-0 ansible-async_wrapper.py[95784]: Start module (95784)
Dec 04 10:16:42 compute-0 ansible-async_wrapper.py[95742]: Return async_wrapper task started.
Dec 04 10:16:43 compute-0 sudo[95705]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:43 compute-0 python3[95785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.32544913 +0000 UTC m=+0.050656434 container create 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:43 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.jnsliu for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:16:43 compute-0 systemd[1]: Started libpod-conmon-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope.
Dec 04 10:16:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.306248293 +0000 UTC m=+0.031455627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.413334489 +0000 UTC m=+0.138541823 container init 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.422136693 +0000 UTC m=+0.147343997 container start 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.426649834 +0000 UTC m=+0.151857138 container attach 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Dec 04 10:16:43 compute-0 podman[95873]: 2025-12-04 10:16:43.61016399 +0000 UTC m=+0.039757179 container create 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:16:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jnsliu supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 04 10:16:43 compute-0 podman[95873]: 2025-12-04 10:16:43.67221453 +0000 UTC m=+0.101807739 container init 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:43 compute-0 podman[95873]: 2025-12-04 10:16:43.677873038 +0000 UTC m=+0.107466227 container start 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 10:16:43 compute-0 bash[95873]: 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181
Dec 04 10:16:43 compute-0 podman[95873]: 2025-12-04 10:16:43.59168809 +0000 UTC m=+0.021281309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:43 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.jnsliu for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 04 10:16:43 compute-0 ceph-mon[75358]: 6.15 scrub starts
Dec 04 10:16:43 compute-0 ceph-mon[75358]: 6.15 scrub ok
Dec 04 10:16:43 compute-0 ceph-mon[75358]: 3.1e scrub starts
Dec 04 10:16:43 compute-0 ceph-mon[75358]: 3.1e scrub ok
Dec 04 10:16:43 compute-0 ceph-mon[75358]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233694077s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.785179138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590950966s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142471313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233659744s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.785179138s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590942383s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142517090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590877533s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142471313s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590904236s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142517090s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543291092s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.403182983s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543252945s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.403175354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543237686s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.403182983s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543200493s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.403175354s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233083725s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784767151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233066559s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784767151s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232978821s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784767151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590917587s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142707825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590906143s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142707825s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232964516s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784767151s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232871056s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784812927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590806961s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142761230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232855797s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784812927s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232735634s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784713745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590793610s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142761230s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232714653s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784713745s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232473373s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784545898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232460976s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784545898s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232380867s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784500122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232365608s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784500122s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590691566s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142860413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590670586s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142860413s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590633392s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142852783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590619087s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142852783s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232382774s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784637451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232368469s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784637451s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232496262s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784820557s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590587616s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142921448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590498924s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142868042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590560913s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142921448s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590484619s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142868042s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232484818s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784820557s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590525627s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.143043518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590511322s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.143043518s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590304375s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142936707s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590291023s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142936707s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590256691s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142913818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231328011s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784034729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590211868s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142913818s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231087685s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231075287s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550660133s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412078857s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 sudo[95457]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550289154s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.411735535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550636292s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412078857s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550270081s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.411735535s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550599098s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412086487s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550566673s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412086487s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550761223s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412376404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550747871s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412376404s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550764084s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412422180s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550706863s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412414551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550726891s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412422180s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231225967s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784080505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230737686s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783622742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231205940s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784080505s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230720520s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783622742s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594789505s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147804260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230801582s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783828735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230789185s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783828735s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594771385s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147804260s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594658852s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147827148s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230663300s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594639778s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147827148s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230645180s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594615936s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147918701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230287552s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783607483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594603539s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147918701s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230266571s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783607483s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594456673s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147933960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594440460s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147933960s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230001450s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783546448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594345093s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147933960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229974747s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783546448s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594331741s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147933960s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594237328s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147956848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230120659s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594224930s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147956848s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230099678s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231058121s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784912109s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594215393s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.148155212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594186783s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.148155212s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229496956s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783584595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229449272s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783584595s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594062805s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.148300171s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594047546s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.148300171s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.224846840s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.779197693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.224827766s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.779197693s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550687790s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412414551s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550971985s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412849426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550957680s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412849426s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550982475s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412879944s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550964355s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412879944s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550533295s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412490845s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550518990s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412490845s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550880432s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412910461s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550874710s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412918091s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550860405s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412910461s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550854683s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412918091s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550910950s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412994385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550898552s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412994385s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551048279s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413200378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551035881s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413200378s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550999641s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413192749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550988197s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413192749s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550899506s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413124084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550881386s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413124084s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551022530s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413330078s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551007271s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413330078s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551032066s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413459778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551012993s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413459778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551015854s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413459778s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550999641s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413459778s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550935745s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413505554s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550924301s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413505554s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550952911s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413574219s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550939560s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413574219s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550898552s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413589478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550886154s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413589478s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550847054s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413597107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550811768s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413589478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550799370s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413589478s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550827026s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413597107s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550859451s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413681030s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550596237s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413414001s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550847054s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413681030s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550754547s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413658142s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550552368s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413414001s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550743103s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413658142s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552104950s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415061951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550853729s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413825989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552092552s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415061951s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550838470s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413825989s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552052498s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415069580s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552037239s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415069580s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551926613s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415077209s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551891327s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415077209s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550276756s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415191650s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550256729s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415191650s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550207138s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415283203s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550189018s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415283203s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550127983s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415351868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550107002s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415351868s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550031662s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415351868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550016403s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415351868s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230355263s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784912109s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.593091011s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147735596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.593063354s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147735596s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231313705s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784034729s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549925804s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415359497s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549912453s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415359497s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549885750s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415435791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549877167s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415435791s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549964905s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415626526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549933434s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415626526s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.097001076s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066772461s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.096982956s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066772461s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.530227661s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502059937s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.530211449s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502059937s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.078037262s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066703796s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.078000069s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066703796s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077732086s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077715874s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066741943s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077480316s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066696167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077457428s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066696167s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077162743s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066627502s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077148438s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066627502s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512312889s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.501876831s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512302399s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.501876831s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076994896s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076984406s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066650391s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512131691s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.501861572s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512122154s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.501861572s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512247086s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502059937s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512236595s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502059937s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076711655s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066612244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076702118s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066612244s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512184143s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502159119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512175560s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502159119s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511563301s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502067566s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511548996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502067566s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075968742s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066665649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075948715s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066665649s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511302948s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502204895s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511286736s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502204895s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.514804840s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502189636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511129379s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502189636s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075341225s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066543579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075322151s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066543579s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074984550s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066413879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074939728s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066413879s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074461937s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066513062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510314941s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502410889s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510289192s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502410889s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510479927s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502388000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074448586s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066513062s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510172844s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502388000s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509974480s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502372742s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509960175s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502372742s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074248314s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066787720s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074235916s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066787720s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073794365s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066368103s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509723663s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502380371s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509709358s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502380371s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073698044s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066368103s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073497772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066261292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073486328s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066261292s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509485245s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502334595s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509465218s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502334595s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509727478s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502601624s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509716034s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502601624s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073373795s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066360474s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073348045s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066360474s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073848724s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066993713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509265900s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502418518s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073454857s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066619873s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073829651s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066993713s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509248734s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502418518s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073436737s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066619873s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509227753s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502494812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072925568s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066230774s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509199142s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502494812s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072909355s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066230774s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072793961s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066223145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072651863s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066223145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072630882s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066223145s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072689056s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066223145s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072225571s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066314697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072202682s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066314697s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071744919s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066108704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071716309s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066108704s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508111000s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502616882s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508092880s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502616882s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508024216s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502563477s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508002281s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502563477s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.507984161s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502647400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.507963181s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502647400s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071427345s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066177368s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071377754s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066177368s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.503145218s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502670288s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:16:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.502871513s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502670288s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:43 compute-0 radosgw[95892]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:16:43 compute-0 radosgw[95892]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 04 10:16:43 compute-0 radosgw[95892]: framework: beast
Dec 04 10:16:43 compute-0 radosgw[95892]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 04 10:16:43 compute-0 radosgw[95892]: init_numa not setting numa affinity
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1))
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1))
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 04 10:16:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:43 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec 04 10:16:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:43 compute-0 brave_yonath[95805]: 
Dec 04 10:16:43 compute-0 brave_yonath[95805]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 04 10:16:43 compute-0 systemd[1]: libpod-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope: Deactivated successfully.
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.866262022 +0000 UTC m=+0.591469326 container died 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:43 compute-0 sudo[95924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c-merged.mount: Deactivated successfully.
Dec 04 10:16:43 compute-0 sudo[95924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:43 compute-0 sudo[95924]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:43 compute-0 podman[95786]: 2025-12-04 10:16:43.913256366 +0000 UTC m=+0.638463670 container remove 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:43 compute-0 systemd[1]: libpod-conmon-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope: Deactivated successfully.
Dec 04 10:16:43 compute-0 ansible-async_wrapper.py[95784]: Module complete (95784)
Dec 04 10:16:43 compute-0 sudo[95961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec 04 10:16:43 compute-0 sudo[95961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:44 compute-0 sudo[96071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfinmouifxyeoskzvxxgfeaulzhmzjv ; /usr/bin/python3'
Dec 04 10:16:44 compute-0 sudo[96071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.344236796 +0000 UTC m=+0.048193005 container create c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:16:44 compute-0 systemd[1]: Started libpod-conmon-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope.
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.319502063 +0000 UTC m=+0.023458302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.43807798 +0000 UTC m=+0.142034279 container init c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:44 compute-0 python3[96079]: ansible-ansible.legacy.async_status Invoked with jid=j888336787719.95742 mode=status _async_dir=/root/.ansible_async
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.446631628 +0000 UTC m=+0.150587877 container start c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.450861661 +0000 UTC m=+0.154817900 container attach c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:16:44 compute-0 sudo[96071]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:44 compute-0 modest_proskuriakova[96089]: 167 167
Dec 04 10:16:44 compute-0 systemd[1]: libpod-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope: Deactivated successfully.
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.457766449 +0000 UTC m=+0.161722688 container died c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-83ca26111969d38a78fb7c3b7f3683ba3c2469629cb22f48c67999a23adc7bc5-merged.mount: Deactivated successfully.
Dec 04 10:16:44 compute-0 podman[96070]: 2025-12-04 10:16:44.517944113 +0000 UTC m=+0.221900362 container remove c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:44 compute-0 systemd[1]: libpod-conmon-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope: Deactivated successfully.
Dec 04 10:16:44 compute-0 systemd[1]: Reloading.
Dec 04 10:16:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:44 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 10 completed events
Dec 04 10:16:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:16:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event e8fbb843-ac01-485d-b1b9-727e8a8c205a (Global Recovery Event) in 12 seconds
Dec 04 10:16:44 compute-0 systemd-sysv-generator[96180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:16:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 04 10:16:44 compute-0 systemd-rc-local-generator[96177]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:16:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 04 10:16:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 04 10:16:44 compute-0 ceph-mon[75358]: 3.6 scrub starts
Dec 04 10:16:44 compute-0 ceph-mon[75358]: 3.6 scrub ok
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: osdmap e44: 3 total, 3 up, 3 in
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: Saving service rgw.rgw spec with placement compute-0
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:44 compute-0 ceph-mon[75358]: Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 04 10:16:44 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 04 10:16:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 04 10:16:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:44 compute-0 sudo[96158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvzgiakxncxenosvyzyerzftcgybyfi ; /usr/bin/python3'
Dec 04 10:16:44 compute-0 sudo[96158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:44 compute-0 systemd[1]: Reloading.
Dec 04 10:16:44 compute-0 systemd-rc-local-generator[96218]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:16:44 compute-0 systemd-sysv-generator[96223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:16:45 compute-0 python3[96191]: ansible-ansible.legacy.async_status Invoked with jid=j888336787719.95742 mode=cleanup _async_dir=/root/.ansible_async
Dec 04 10:16:45 compute-0 sudo[96158]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:45 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 04 10:16:45 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 04 10:16:45 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.zcbnoq for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec 04 10:16:45 compute-0 podman[96277]: 2025-12-04 10:16:45.407290228 +0000 UTC m=+0.049469585 container create 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zcbnoq supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 podman[96277]: 2025-12-04 10:16:45.384797181 +0000 UTC m=+0.026976508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:45 compute-0 podman[96277]: 2025-12-04 10:16:45.480437938 +0000 UTC m=+0.122617275 container init 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:16:45 compute-0 podman[96277]: 2025-12-04 10:16:45.48622948 +0000 UTC m=+0.128408797 container start 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:45 compute-0 bash[96277]: 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b
Dec 04 10:16:45 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.zcbnoq for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec 04 10:16:45 compute-0 ceph-mds[96299]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:16:45 compute-0 ceph-mds[96299]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 04 10:16:45 compute-0 ceph-mds[96299]: main not setting numa affinity
Dec 04 10:16:45 compute-0 ceph-mds[96299]: pidfile_write: ignore empty --pid-file
Dec 04 10:16:45 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq[96293]: starting mds.cephfs.compute-0.zcbnoq at 
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 2 from mon.0
Dec 04 10:16:45 compute-0 sudo[96329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmyrkirggdkqlqxsftyzvfbrlrhfeklx ; /usr/bin/python3'
Dec 04 10:16:45 compute-0 sudo[96329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:45 compute-0 sudo[95961]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1))
Dec 04 10:16:45 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 sudo[96342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:16:45 compute-0 sudo[96342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:45 compute-0 sudo[96342]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:45 compute-0 python3[96341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:45 compute-0 sudo[96367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:45 compute-0 sudo[96367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:45 compute-0 sudo[96367]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 new map
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-04T10:16:45:747724+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-04T10:16:31.947313+0000
                                           modified        2025-12-04T10:16:31.947313+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zcbnoq{-1:14255} state up:standby seq 1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 3 from mon.0
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Monitors have assigned me to become a standby
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:boot
Dec 04 10:16:45 compute-0 podman[96370]: 2025-12-04 10:16:45.755029471 +0000 UTC m=+0.044311169 container create dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] as mds.0
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zcbnoq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zcbnoq"} v 0)
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zcbnoq"} : dispatch
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 all = 0
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e4 new map
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-04T10:16:45:755624+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-04T10:16:31.947313+0000
                                           modified        2025-12-04T10:16:45.755617+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14255}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.zcbnoq{0:14255} state up:creating seq 1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 04 10:16:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 04 10:16:45 compute-0 ceph-mon[75358]: 7.1e scrub starts
Dec 04 10:16:45 compute-0 ceph-mon[75358]: 7.1e scrub ok
Dec 04 10:16:45 compute-0 ceph-mon[75358]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:16:45 compute-0 ceph-mon[75358]: osdmap e45: 3 total, 3 up, 3 in
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 4 from mon.0
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x1
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x100
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x600
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x601
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x602
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x603
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x604
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x605
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x606
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x607
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x608
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x609
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:creating}
Dec 04 10:16:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:45 compute-0 ceph-mds[96299]: mds.0.4 creating_done
Dec 04 10:16:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zcbnoq is now active in filesystem cephfs as rank 0
Dec 04 10:16:45 compute-0 systemd[1]: Started libpod-conmon-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope.
Dec 04 10:16:45 compute-0 sudo[96405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:16:45 compute-0 sudo[96405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:45 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:45 compute-0 podman[96370]: 2025-12-04 10:16:45.73772138 +0000 UTC m=+0.027003108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:45 compute-0 podman[96370]: 2025-12-04 10:16:45.84291888 +0000 UTC m=+0.132200608 container init dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:45 compute-0 podman[96370]: 2025-12-04 10:16:45.850521066 +0000 UTC m=+0.139802764 container start dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:16:45 compute-0 podman[96370]: 2025-12-04 10:16:45.854114113 +0000 UTC m=+0.143395831 container attach dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:46 compute-0 podman[97071]: 2025-12-04 10:16:46.209207135 +0000 UTC m=+0.047934737 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:16:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:46 compute-0 admiring_lamport[96489]: 
Dec 04 10:16:46 compute-0 admiring_lamport[96489]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 04 10:16:46 compute-0 systemd[1]: libpod-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope: Deactivated successfully.
Dec 04 10:16:46 compute-0 podman[96370]: 2025-12-04 10:16:46.278519162 +0000 UTC m=+0.567800860 container died dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca-merged.mount: Deactivated successfully.
Dec 04 10:16:46 compute-0 podman[96370]: 2025-12-04 10:16:46.32235928 +0000 UTC m=+0.611640978 container remove dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:46 compute-0 podman[97071]: 2025-12-04 10:16:46.327184447 +0000 UTC m=+0.165912069 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:46 compute-0 systemd[1]: libpod-conmon-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope: Deactivated successfully.
Dec 04 10:16:46 compute-0 sudo[96329]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:46 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 04 10:16:46 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 04 10:16:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v107: 194 pgs: 1 unknown, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 2 op/s
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 04 10:16:46 compute-0 ceph-mon[75358]: 5.1c scrub starts
Dec 04 10:16:46 compute-0 ceph-mon[75358]: 5.1c scrub ok
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:boot
Dec 04 10:16:46 compute-0 ceph-mon[75358]: daemon mds.cephfs.compute-0.zcbnoq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 04 10:16:46 compute-0 ceph-mon[75358]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 04 10:16:46 compute-0 ceph-mon[75358]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 04 10:16:46 compute-0 ceph-mon[75358]: Cluster is now healthy
Dec 04 10:16:46 compute-0 ceph-mon[75358]: fsmap cephfs:0 1 up:standby
Dec 04 10:16:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zcbnoq"} : dispatch
Dec 04 10:16:46 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 04 10:16:46 compute-0 ceph-mon[75358]: osdmap e46: 3 total, 3 up, 3 in
Dec 04 10:16:46 compute-0 ceph-mon[75358]: fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:creating}
Dec 04 10:16:46 compute-0 ceph-mon[75358]: daemon mds.cephfs.compute-0.zcbnoq is now active in filesystem cephfs as rank 0
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e5 new map
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-04T10:16:46:764153+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-04T10:16:31.947313+0000
                                           modified        2025-12-04T10:16:46.764151+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14255}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14255 members: 14255
                                           [mds.cephfs.compute-0.zcbnoq{0:14255} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 04 10:16:46 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 5 from mon.0
Dec 04 10:16:46 compute-0 ceph-mds[96299]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 04 10:16:46 compute-0 ceph-mds[96299]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 04 10:16:46 compute-0 ceph-mds[96299]: mds.0.4 recovery_done -- successful recovery!
Dec 04 10:16:46 compute-0 ceph-mds[96299]: mds.0.4 active_start
Dec 04 10:16:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:active
Dec 04 10:16:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:active}
Dec 04 10:16:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 04 10:16:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 04 10:16:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:46 compute-0 sudo[96405]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:47 compute-0 sudo[97301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkoditkarzbjkqnlejstssgfoqdfgres ; /usr/bin/python3'
Dec 04 10:16:47 compute-0 sudo[97301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:47 compute-0 sudo[97292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:47 compute-0 sudo[97292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:47 compute-0 sudo[97292]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:47 compute-0 sudo[97323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:16:47 compute-0 sudo[97323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:47 compute-0 python3[97320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.259865266 +0000 UTC m=+0.040631560 container create 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:16:47 compute-0 systemd[1]: Started libpod-conmon-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope.
Dec 04 10:16:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.333818536 +0000 UTC m=+0.114584860 container init 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.242562275 +0000 UTC m=+0.023328599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.342248831 +0000 UTC m=+0.123015125 container start 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.347559861 +0000 UTC m=+0.128326285 container attach 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.458594353 +0000 UTC m=+0.049536307 container create db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:47 compute-0 systemd[1]: Started libpod-conmon-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope.
Dec 04 10:16:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.439380755 +0000 UTC m=+0.030322739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.532897701 +0000 UTC m=+0.123839675 container init db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.53941869 +0000 UTC m=+0.130360654 container start db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:16:47 compute-0 admiring_heisenberg[97415]: 167 167
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.543369306 +0000 UTC m=+0.134311280 container attach db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:16:47 compute-0 systemd[1]: libpod-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope: Deactivated successfully.
Dec 04 10:16:47 compute-0 conmon[97415]: conmon db9a7e73782855231454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope/container/memory.events
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.545442866 +0000 UTC m=+0.136384830 container died db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-904148f26eb140fb51d9e249af113099c5972c942f345fda103dbf8239fff853-merged.mount: Deactivated successfully.
Dec 04 10:16:47 compute-0 podman[97380]: 2025-12-04 10:16:47.588916134 +0000 UTC m=+0.179858088 container remove db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:47 compute-0 systemd[1]: libpod-conmon-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope: Deactivated successfully.
Dec 04 10:16:47 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:16:47 compute-0 funny_leakey[97363]: 
Dec 04 10:16:47 compute-0 funny_leakey[97363]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 04 10:16:47 compute-0 systemd[1]: libpod-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope: Deactivated successfully.
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.766279001 +0000 UTC m=+0.547045295 container died 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 04 10:16:47 compute-0 podman[97438]: 2025-12-04 10:16:47.781773908 +0000 UTC m=+0.058525015 container create 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 04 10:16:47 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 04 10:16:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: 7.1d scrub starts
Dec 04 10:16:47 compute-0 ceph-mon[75358]: 7.1d scrub ok
Dec 04 10:16:47 compute-0 ceph-mon[75358]: pgmap v107: 194 pgs: 1 unknown, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 2 op/s
Dec 04 10:16:47 compute-0 ceph-mon[75358]: mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:active
Dec 04 10:16:47 compute-0 ceph-mon[75358]: fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:active}
Dec 04 10:16:47 compute-0 ceph-mon[75358]: osdmap e47: 3 total, 3 up, 3 in
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf-merged.mount: Deactivated successfully.
Dec 04 10:16:47 compute-0 podman[97348]: 2025-12-04 10:16:47.829896109 +0000 UTC m=+0.610662403 container remove 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:16:47 compute-0 systemd[1]: Started libpod-conmon-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope.
Dec 04 10:16:47 compute-0 systemd[1]: libpod-conmon-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope: Deactivated successfully.
Dec 04 10:16:47 compute-0 podman[97438]: 2025-12-04 10:16:47.75512181 +0000 UTC m=+0.031872937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:47 compute-0 sudo[97301]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:47 compute-0 podman[97438]: 2025-12-04 10:16:47.903261975 +0000 UTC m=+0.180013112 container init 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:16:47 compute-0 podman[97438]: 2025-12-04 10:16:47.912351106 +0000 UTC m=+0.189102213 container start 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:47 compute-0 podman[97438]: 2025-12-04 10:16:47.915533024 +0000 UTC m=+0.192284331 container attach 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:16:47 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 04 10:16:47 compute-0 ansible-async_wrapper.py[95783]: Done in kid B.
Dec 04 10:16:47 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 04 10:16:48 compute-0 kind_poincare[97473]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:16:48 compute-0 kind_poincare[97473]: --> All data devices are unavailable
Dec 04 10:16:48 compute-0 systemd[1]: libpod-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope: Deactivated successfully.
Dec 04 10:16:48 compute-0 conmon[97473]: conmon 84a7054aa989a38fb209 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope/container/memory.events
Dec 04 10:16:48 compute-0 podman[97438]: 2025-12-04 10:16:48.470385458 +0000 UTC m=+0.747136585 container died 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca-merged.mount: Deactivated successfully.
Dec 04 10:16:48 compute-0 podman[97438]: 2025-12-04 10:16:48.521079152 +0000 UTC m=+0.797830259 container remove 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:16:48 compute-0 systemd[1]: libpod-conmon-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope: Deactivated successfully.
Dec 04 10:16:48 compute-0 sudo[97323]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:48 compute-0 sudo[97503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:48 compute-0 sudo[97503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:48 compute-0 sudo[97503]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v110: 195 pgs: 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 04 10:16:48 compute-0 sudo[97528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:16:48 compute-0 sudo[97528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:48 compute-0 sudo[97576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfthoxfnhzdqedmvlopinqyzsunnvcxh ; /usr/bin/python3'
Dec 04 10:16:48 compute-0 sudo[97576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 04 10:16:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 04 10:16:48 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 04 10:16:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 04 10:16:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 04 10:16:48 compute-0 ceph-mon[75358]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 04 10:16:48 compute-0 ceph-mon[75358]: osdmap e48: 3 total, 3 up, 3 in
Dec 04 10:16:48 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 04 10:16:48 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 49 pg[10.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:48 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 04 10:16:48 compute-0 python3[97578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.004408565 +0000 UTC m=+0.047254962 container create c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.019653036 +0000 UTC m=+0.061775095 container create ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:16:49 compute-0 systemd[1]: Started libpod-conmon-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope.
Dec 04 10:16:49 compute-0 systemd[1]: Started libpod-conmon-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope.
Dec 04 10:16:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:48.983057745 +0000 UTC m=+0.025904152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.084107585 +0000 UTC m=+0.126953992 container init c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.087190229 +0000 UTC m=+0.129312298 container init ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:48.997178589 +0000 UTC m=+0.039300658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.093507804 +0000 UTC m=+0.135629853 container start ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.093984325 +0000 UTC m=+0.136830702 container start c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.098012183 +0000 UTC m=+0.140134232 container attach ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:16:49 compute-0 serene_rhodes[97621]: 167 167
Dec 04 10:16:49 compute-0 systemd[1]: libpod-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.105537336 +0000 UTC m=+0.148383743 container attach c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.105956417 +0000 UTC m=+0.148802824 container died c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-980f4684966e580ef4244060841fa442939949e625dfd6ac11d2fddd3c026e4d-merged.mount: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97592]: 2025-12-04 10:16:49.147372884 +0000 UTC m=+0.190219271 container remove c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:16:49 compute-0 systemd[1]: libpod-conmon-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97667]: 2025-12-04 10:16:49.309733186 +0000 UTC m=+0.045876257 container create b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:16:49 compute-0 systemd[1]: Started libpod-conmon-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope.
Dec 04 10:16:49 compute-0 podman[97667]: 2025-12-04 10:16:49.288234683 +0000 UTC m=+0.024377764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:49 compute-0 podman[97667]: 2025-12-04 10:16:49.410652782 +0000 UTC m=+0.146795863 container init b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:16:49 compute-0 podman[97667]: 2025-12-04 10:16:49.42867604 +0000 UTC m=+0.164819111 container start b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:16:49 compute-0 podman[97667]: 2025-12-04 10:16:49.432720019 +0000 UTC m=+0.168863100 container attach b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 04 10:16:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:49 compute-0 elegant_cartwright[97622]: 
Dec 04 10:16:49 compute-0 elegant_cartwright[97622]: [{"container_id": "821fa491a4b1", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.16%", "created": "2025-12-04T10:14:49.149243Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-04T10:14:49.224268Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994957Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-04T10:14:49.022633Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@crash.compute-0", "version": "20.2.0"}, {"container_id": "8653c026f7d4", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.68%", "created": "2025-12-04T10:16:45.505937Z", "daemon_id": "cephfs.compute-0.zcbnoq", "daemon_name": "mds.cephfs.compute-0.zcbnoq", "daemon_type": "mds", "events": ["2025-12-04T10:16:45.585806Z daemon:mds.cephfs.compute-0.zcbnoq [INFO] \"Deployed mds.cephfs.compute-0.zcbnoq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-12-04T10:16:46.995352Z", "memory_usage": 16053698, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-04T10:16:45.389247Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mds.cephfs.compute-0.zcbnoq", "version": "20.2.0"}, {"container_id": "aa9fc7b1d662", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "13.60%", "created": "2025-12-04T10:14:08.826528Z", "daemon_id": "compute-0.iwufnj", "daemon_name": "mgr.compute-0.iwufnj", "daemon_type": "mgr", "events": ["2025-12-04T10:14:54.221659Z daemon:mgr.compute-0.iwufnj [INFO] \"Reconfigured mgr.compute-0.iwufnj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994885Z", "memory_usage": 550292684, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-04T10:14:08.685306Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.iwufnj", "version": "20.2.0"}, {"container_id": "5c64ed29fbaf", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.61%", "created": "2025-12-04T10:14:03.447401Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-04T10:14:53.499043Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994792Z", "memory_request": 2147483648, "memory_usage": 43557847, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-04T10:14:05.870884Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0", "version": "20.2.0"}, {"container_id": "f4a07ff69694", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2025-12-04T10:15:22.376969Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-04T10:15:22.438966Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995027Z", "memory_request": 4294967296, "memory_usage": 69751275, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:22.242424Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.0", "version": "20.2.0"}, {"container_id": "f6ca53226c0f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.94%", "created": "2025-12-04T10:15:27.740096Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-04T10:15:28.283050Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995122Z", "memory_request": 4294967296, "memory_usage": 68325212, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:27.393328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.1", "version": "20.2.0"}, {"container_id": "743bc5e794db", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.98%", "created": "2025-12-04T10:15:37.213321Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-04T10:15:37.349843Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995196Z", "memory_request": 4294967296, "memory_usage": 67643637, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:36.977634Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.2", "version": "20.2.0"}, {"container_id": "94b64ba6339c", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.06%", "created": "2025-12-04T10:16:43.690894Z", "daemon_id": "rgw.compute-0.jnsliu", "daemon_name": "rgw.rgw.compute-0.jnsliu", "daemon_type": "rgw", "events": ["2025-12-04T10:16:43.790226Z daemon:rgw.rgw.compute-0.jnsliu [INFO] \"Deployed rgw.rgw.compute-0.jnsliu on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995267Z", "memory_usage": 56371445, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-04T10:16:43.596084Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@rgw.rgw.compute-0.jnsliu", "version": "20.2.0"}]
Dec 04 10:16:49 compute-0 systemd[1]: libpod-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.571878106 +0000 UTC m=+0.614000175 container died ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:16:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:49 compute-0 rsyslogd[1007]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "821fa491a4b1", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 04 10:16:49 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 12 completed events
Dec 04 10:16:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:16:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784-merged.mount: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97591]: 2025-12-04 10:16:49.619626278 +0000 UTC m=+0.661748337 container remove ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:49 compute-0 systemd[1]: libpod-conmon-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 sudo[97576]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]: {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     "0": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "devices": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "/dev/loop3"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             ],
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_name": "ceph_lv0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_size": "21470642176",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "name": "ceph_lv0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "tags": {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.crush_device_class": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.encrypted": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_id": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.vdo": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.with_tpm": "0"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             },
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "vg_name": "ceph_vg0"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         }
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     ],
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     "1": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "devices": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "/dev/loop4"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             ],
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_name": "ceph_lv1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_size": "21470642176",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "name": "ceph_lv1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "tags": {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.crush_device_class": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.encrypted": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_id": "1",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.vdo": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.with_tpm": "0"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             },
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "vg_name": "ceph_vg1"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         }
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     ],
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     "2": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "devices": [
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "/dev/loop5"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             ],
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_name": "ceph_lv2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_size": "21470642176",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "name": "ceph_lv2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "tags": {
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.crush_device_class": "",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.encrypted": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osd_id": "2",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.vdo": "0",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:                 "ceph.with_tpm": "0"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             },
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "type": "block",
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:             "vg_name": "ceph_vg2"
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:         }
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]:     ]
Dec 04 10:16:49 compute-0 wonderful_swartz[97684]: }
Dec 04 10:16:49 compute-0 systemd[1]: libpod-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 04 10:16:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 04 10:16:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 04 10:16:49 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 04 10:16:49 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:49 compute-0 ceph-mon[75358]: 6.1a scrub starts
Dec 04 10:16:49 compute-0 ceph-mon[75358]: 6.1a scrub ok
Dec 04 10:16:49 compute-0 ceph-mon[75358]: pgmap v110: 195 pgs: 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 04 10:16:49 compute-0 ceph-mon[75358]: osdmap e49: 3 total, 3 up, 3 in
Dec 04 10:16:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 04 10:16:49 compute-0 ceph-mon[75358]: 2.1a scrub starts
Dec 04 10:16:49 compute-0 ceph-mon[75358]: 2.1a scrub ok
Dec 04 10:16:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 04 10:16:49 compute-0 ceph-mon[75358]: osdmap e50: 3 total, 3 up, 3 in
Dec 04 10:16:49 compute-0 podman[97707]: 2025-12-04 10:16:49.850511488 +0000 UTC m=+0.042278191 container died b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2-merged.mount: Deactivated successfully.
Dec 04 10:16:49 compute-0 podman[97707]: 2025-12-04 10:16:49.906345996 +0000 UTC m=+0.098112719 container remove b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:16:49 compute-0 systemd[1]: libpod-conmon-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope: Deactivated successfully.
Dec 04 10:16:49 compute-0 sudo[97528]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:50 compute-0 sudo[97721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:50 compute-0 sudo[97721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:50 compute-0 sudo[97721]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:50 compute-0 sudo[97748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:16:50 compute-0 sudo[97748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:50 compute-0 sshd-session[97730]: Invalid user vtatis from 74.249.218.27 port 54384
Dec 04 10:16:50 compute-0 sshd-session[97730]: Received disconnect from 74.249.218.27 port 54384:11: Bye Bye [preauth]
Dec 04 10:16:50 compute-0 sshd-session[97730]: Disconnected from invalid user vtatis 74.249.218.27 port 54384 [preauth]
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.46078296 +0000 UTC m=+0.058004673 container create a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:50 compute-0 systemd[1]: Started libpod-conmon-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope.
Dec 04 10:16:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.431907278 +0000 UTC m=+0.029129081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.536459472 +0000 UTC m=+0.133681205 container init a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.544546049 +0000 UTC m=+0.141767762 container start a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.549200713 +0000 UTC m=+0.146422516 container attach a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:16:50 compute-0 priceless_curie[97803]: 167 167
Dec 04 10:16:50 compute-0 systemd[1]: libpod-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope: Deactivated successfully.
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.552427251 +0000 UTC m=+0.149649024 container died a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:50 compute-0 sudo[97837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpdfsalrpmnktlyjyigzafxzwtzuthy ; /usr/bin/python3'
Dec 04 10:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f56cd112e90e88008c3fa92384ab92ed12c510dfe2b4923653865c46c2ce418d-merged.mount: Deactivated successfully.
Dec 04 10:16:50 compute-0 sudo[97837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:50 compute-0 podman[97787]: 2025-12-04 10:16:50.602756136 +0000 UTC m=+0.199977859 container remove a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:50 compute-0 systemd[1]: libpod-conmon-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope: Deactivated successfully.
Dec 04 10:16:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v113: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 9 op/s
Dec 04 10:16:50 compute-0 python3[97844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:50 compute-0 ceph-mds[96299]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 04 10:16:50 compute-0 podman[97852]: 2025-12-04 10:16:50.76769535 +0000 UTC m=+0.046046902 container create 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:16:50 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq[96293]: 2025-12-04T10:16:50.766+0000 7efc31a2c640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 04 10:16:50 compute-0 podman[97859]: 2025-12-04 10:16:50.78785683 +0000 UTC m=+0.046658326 container create d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:16:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 04 10:16:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 04 10:16:50 compute-0 systemd[1]: Started libpod-conmon-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope.
Dec 04 10:16:50 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 04 10:16:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 04 10:16:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 04 10:16:50 compute-0 systemd[1]: Started libpod-conmon-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope.
Dec 04 10:16:50 compute-0 ceph-mon[75358]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 04 10:16:50 compute-0 ceph-mon[75358]: osdmap e51: 3 total, 3 up, 3 in
Dec 04 10:16:50 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 04 10:16:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 podman[97852]: 2025-12-04 10:16:50.748009051 +0000 UTC m=+0.026360623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:50 compute-0 podman[97852]: 2025-12-04 10:16:50.855406214 +0000 UTC m=+0.133757806 container init 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:16:50 compute-0 podman[97859]: 2025-12-04 10:16:50.768075849 +0000 UTC m=+0.026877365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:50 compute-0 podman[97859]: 2025-12-04 10:16:50.865047919 +0000 UTC m=+0.123849445 container init d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:50 compute-0 podman[97852]: 2025-12-04 10:16:50.865304035 +0000 UTC m=+0.143655587 container start 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:50 compute-0 podman[97852]: 2025-12-04 10:16:50.869717622 +0000 UTC m=+0.148069174 container attach 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:50 compute-0 podman[97859]: 2025-12-04 10:16:50.872407238 +0000 UTC m=+0.131208734 container start d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:50 compute-0 podman[97859]: 2025-12-04 10:16:50.876476687 +0000 UTC m=+0.135278183 container attach d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:50 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 04 10:16:50 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 04 10:16:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565692539' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:51 compute-0 eager_visvesvaraya[97886]: 
Dec 04 10:16:51 compute-0 eager_visvesvaraya[97886]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":164,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":196,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":84447232,"bytes_avail":64327479296,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051020407117903233,"inactive_pgs_ratio":0.0051020407117903233,"read_bytes_sec":1279,"write_bytes_sec":2047,"read_op_per_sec":0,"write_op_per_sec":8},"fsmap":{"epoch":5,"btime":"2025-12-04T10:16:46:764153+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.zcbnoq","status":"up:active","gid":14255}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-04T10:16:46.700761+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.zcbnoq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 04 10:16:51 compute-0 systemd[1]: libpod-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope: Deactivated successfully.
Dec 04 10:16:51 compute-0 podman[97965]: 2025-12-04 10:16:51.474054971 +0000 UTC m=+0.030265808 container died d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa-merged.mount: Deactivated successfully.
Dec 04 10:16:51 compute-0 podman[97965]: 2025-12-04 10:16:51.515796067 +0000 UTC m=+0.072006884 container remove d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:16:51 compute-0 systemd[1]: libpod-conmon-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope: Deactivated successfully.
Dec 04 10:16:51 compute-0 sudo[97837]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:51 compute-0 lvm[98003]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:16:51 compute-0 lvm[98002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:16:51 compute-0 lvm[98003]: VG ceph_vg1 finished
Dec 04 10:16:51 compute-0 lvm[98002]: VG ceph_vg0 finished
Dec 04 10:16:51 compute-0 lvm[98005]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:16:51 compute-0 lvm[98005]: VG ceph_vg2 finished
Dec 04 10:16:51 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 04 10:16:51 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 04 10:16:51 compute-0 zealous_davinci[97884]: {}
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 04 10:16:51 compute-0 systemd[1]: libpod-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Deactivated successfully.
Dec 04 10:16:51 compute-0 systemd[1]: libpod-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Consumed 1.515s CPU time.
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 04 10:16:51 compute-0 podman[97852]: 2025-12-04 10:16:51.816080265 +0000 UTC m=+1.094431817 container died 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 04 10:16:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:16:51 compute-0 ceph-mon[75358]: pgmap v113: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 9 op/s
Dec 04 10:16:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3565692539' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 04 10:16:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 04 10:16:51 compute-0 ceph-mon[75358]: osdmap e52: 3 total, 3 up, 3 in
Dec 04 10:16:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 04 10:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2-merged.mount: Deactivated successfully.
Dec 04 10:16:51 compute-0 podman[97852]: 2025-12-04 10:16:51.874264702 +0000 UTC m=+1.152616254 container remove 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:51 compute-0 systemd[1]: libpod-conmon-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Deactivated successfully.
Dec 04 10:16:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 04 10:16:51 compute-0 sudo[97748]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 sudo[98021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:16:52 compute-0 sudo[98021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:52 compute-0 sudo[98021]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 sudo[98046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:52 compute-0 sudo[98046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:52 compute-0 sudo[98046]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:52 compute-0 sudo[98071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:16:52 compute-0 sudo[98071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:52 compute-0 sudo[98119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-givwnvoliystaecqstqbowwpsuedtzii ; /usr/bin/python3'
Dec 04 10:16:52 compute-0 sudo[98119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:52 compute-0 python3[98128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:52 compute-0 podman[98156]: 2025-12-04 10:16:52.660072937 +0000 UTC m=+0.051396283 container create 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:16:52 compute-0 systemd[1]: Started libpod-conmon-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope.
Dec 04 10:16:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v116: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 04 10:16:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:52 compute-0 podman[98156]: 2025-12-04 10:16:52.642021107 +0000 UTC m=+0.033344443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:52 compute-0 podman[98179]: 2025-12-04 10:16:52.745969458 +0000 UTC m=+0.081038984 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:16:52 compute-0 podman[98156]: 2025-12-04 10:16:52.749872442 +0000 UTC m=+0.141195798 container init 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:52 compute-0 podman[98156]: 2025-12-04 10:16:52.756548395 +0000 UTC m=+0.147871741 container start 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 04 10:16:52 compute-0 podman[98156]: 2025-12-04 10:16:52.759477236 +0000 UTC m=+0.150800602 container attach 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:16:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 04 10:16:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 04 10:16:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 04 10:16:52 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 04 10:16:52 compute-0 podman[98179]: 2025-12-04 10:16:52.840806735 +0000 UTC m=+0.175876281 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:52 compute-0 ceph-mon[75358]: 4.17 scrub starts
Dec 04 10:16:52 compute-0 ceph-mon[75358]: 4.17 scrub ok
Dec 04 10:16:52 compute-0 ceph-mon[75358]: 3.1a scrub starts
Dec 04 10:16:52 compute-0 ceph-mon[75358]: 3.1a scrub ok
Dec 04 10:16:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 04 10:16:52 compute-0 ceph-mon[75358]: osdmap e53: 3 total, 3 up, 3 in
Dec 04 10:16:53 compute-0 radosgw[95892]: v1 topic migration: starting v1 topic migration..
Dec 04 10:16:53 compute-0 radosgw[95892]: v1 topic migration: finished v1 topic migration
Dec 04 10:16:53 compute-0 radosgw[95892]: framework: beast
Dec 04 10:16:53 compute-0 radosgw[95892]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 04 10:16:53 compute-0 radosgw[95892]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 04 10:16:53 compute-0 radosgw[95892]: starting handler: beast
Dec 04 10:16:53 compute-0 radosgw[95892]: set uid:gid to 167:167 (ceph:ceph)
Dec 04 10:16:53 compute-0 radosgw[95892]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jnsliu,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=ec3e4f8e-ad34-4c75-8bd1-299db07ac24d,zone_name=default,zonegroup_id=6e153e17-7b8b-4b77-9534-ddba9e20c703,zonegroup_name=default}
Dec 04 10:16:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 04 10:16:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101598567' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:16:53 compute-0 suspicious_rosalind[98195]: 
Dec 04 10:16:53 compute-0 suspicious_rosalind[98195]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.jnsliu","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 04 10:16:53 compute-0 systemd[1]: libpod-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope: Deactivated successfully.
Dec 04 10:16:53 compute-0 podman[98156]: 2025-12-04 10:16:53.190132388 +0000 UTC m=+0.581455724 container died 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04-merged.mount: Deactivated successfully.
Dec 04 10:16:53 compute-0 podman[98156]: 2025-12-04 10:16:53.231595357 +0000 UTC m=+0.622918693 container remove 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:53 compute-0 systemd[1]: libpod-conmon-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope: Deactivated successfully.
Dec 04 10:16:53 compute-0 sudo[98119]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:53 compute-0 sudo[98071]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:53 compute-0 sudo[98439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:53 compute-0 sudo[98439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:53 compute-0 sudo[98439]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:53 compute-0 ceph-mon[75358]: 4.16 scrub starts
Dec 04 10:16:53 compute-0 ceph-mon[75358]: 4.16 scrub ok
Dec 04 10:16:53 compute-0 ceph-mon[75358]: pgmap v116: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 04 10:16:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/101598567' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 04 10:16:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:53 compute-0 sudo[98464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:16:53 compute-0 sudo[98464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:53 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 04 10:16:53 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 04 10:16:54 compute-0 sudo[98526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnhfotwhtwzuckpxrsbuhqykdmdoafe ; /usr/bin/python3'
Dec 04 10:16:54 compute-0 sudo[98526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:54 compute-0 python3[98529]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:54 compute-0 podman[98532]: 2025-12-04 10:16:54.472518469 +0000 UTC m=+0.067180196 container create 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:16:54 compute-0 systemd[1]: Started libpod-conmon-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope.
Dec 04 10:16:54 compute-0 sudo[98464]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:54 compute-0 podman[98532]: 2025-12-04 10:16:54.443692147 +0000 UTC m=+0.038353924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:16:54 compute-0 podman[98532]: 2025-12-04 10:16:54.568985817 +0000 UTC m=+0.163647554 container init 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:54 compute-0 podman[98532]: 2025-12-04 10:16:54.580929567 +0000 UTC m=+0.175591294 container start 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:16:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:54 compute-0 podman[98532]: 2025-12-04 10:16:54.585188231 +0000 UTC m=+0.179850048 container attach 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:54 compute-0 sudo[98566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:54 compute-0 sudo[98566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:54 compute-0 sudo[98566]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v118: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 209 B/s rd, 418 B/s wr, 1 op/s
Dec 04 10:16:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 04 10:16:54 compute-0 sudo[98591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:16:54 compute-0 sudo[98591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:16:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:16:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 04 10:16:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773187369' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 04 10:16:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 04 10:16:55 compute-0 lucid_johnson[98562]: mimic
Dec 04 10:16:55 compute-0 podman[98648]: 2025-12-04 10:16:55.017310488 +0000 UTC m=+0.053465982 container create b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:16:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 04 10:16:55 compute-0 systemd[1]: libpod-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope: Deactivated successfully.
Dec 04 10:16:55 compute-0 podman[98532]: 2025-12-04 10:16:55.030078609 +0000 UTC m=+0.624740336 container died 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:55 compute-0 systemd[1]: Started libpod-conmon-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope.
Dec 04 10:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032-merged.mount: Deactivated successfully.
Dec 04 10:16:55 compute-0 podman[98532]: 2025-12-04 10:16:55.073891725 +0000 UTC m=+0.668553452 container remove 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:16:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:55 compute-0 podman[98648]: 2025-12-04 10:16:54.983150456 +0000 UTC m=+0.019306030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:55 compute-0 systemd[1]: libpod-conmon-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope: Deactivated successfully.
Dec 04 10:16:55 compute-0 podman[98648]: 2025-12-04 10:16:55.089927856 +0000 UTC m=+0.126083360 container init b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:16:55 compute-0 podman[98648]: 2025-12-04 10:16:55.094851295 +0000 UTC m=+0.131006779 container start b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:55 compute-0 sudo[98526]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:55 compute-0 podman[98648]: 2025-12-04 10:16:55.098231358 +0000 UTC m=+0.134386852 container attach b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:55 compute-0 elated_poitras[98674]: 167 167
Dec 04 10:16:55 compute-0 systemd[1]: libpod-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope: Deactivated successfully.
Dec 04 10:16:55 compute-0 conmon[98674]: conmon b2b10dc1e43c28165a4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope/container/memory.events
Dec 04 10:16:55 compute-0 podman[98684]: 2025-12-04 10:16:55.148134242 +0000 UTC m=+0.031664731 container died b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-38ad49f701e8d88eb95cef9b9753019a749e7ae149baf86cb99b466e0d538b78-merged.mount: Deactivated successfully.
Dec 04 10:16:55 compute-0 podman[98684]: 2025-12-04 10:16:55.181657748 +0000 UTC m=+0.065188217 container remove b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:55 compute-0 systemd[1]: libpod-conmon-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope: Deactivated successfully.
Dec 04 10:16:55 compute-0 podman[98705]: 2025-12-04 10:16:55.371525039 +0000 UTC m=+0.048240376 container create 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:55 compute-0 systemd[1]: Started libpod-conmon-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope.
Dec 04 10:16:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:55 compute-0 podman[98705]: 2025-12-04 10:16:55.351131062 +0000 UTC m=+0.027846439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:55 compute-0 podman[98705]: 2025-12-04 10:16:55.468400477 +0000 UTC m=+0.145115854 container init 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:16:55 compute-0 podman[98705]: 2025-12-04 10:16:55.477639072 +0000 UTC m=+0.154354419 container start 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:55 compute-0 podman[98705]: 2025-12-04 10:16:55.480906471 +0000 UTC m=+0.157621848 container attach 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 04 10:16:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 4.15 scrub starts
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 4.15 scrub ok
Dec 04 10:16:55 compute-0 ceph-mon[75358]: pgmap v118: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 209 B/s rd, 418 B/s wr, 1 op/s
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 3.19 scrub starts
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 3.19 scrub ok
Dec 04 10:16:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2773187369' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 5.1f scrub starts
Dec 04 10:16:55 compute-0 ceph-mon[75358]: 5.1f scrub ok
Dec 04 10:16:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 04 10:16:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 04 10:16:56 compute-0 sudo[98762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmomqmyjxiphmjevlvgbvytjgycfjhns ; /usr/bin/python3'
Dec 04 10:16:56 compute-0 sudo[98762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:16:56 compute-0 mystifying_yonath[98722]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:16:56 compute-0 mystifying_yonath[98722]: --> All data devices are unavailable
Dec 04 10:16:56 compute-0 systemd[1]: libpod-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98705]: 2025-12-04 10:16:56.1164895 +0000 UTC m=+0.793204887 container died 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679-merged.mount: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98705]: 2025-12-04 10:16:56.183384899 +0000 UTC m=+0.860100256 container remove 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:56 compute-0 python3[98766]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:16:56 compute-0 systemd[1]: libpod-conmon-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 sudo[98591]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.257972634 +0000 UTC m=+0.049355362 container create 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:56 compute-0 systemd[1]: Started libpod-conmon-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope.
Dec 04 10:16:56 compute-0 sudo[98791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:56 compute-0 sudo[98791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:56 compute-0 sudo[98791]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.237093646 +0000 UTC m=+0.028476424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:16:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.360943179 +0000 UTC m=+0.152325927 container init 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.367151191 +0000 UTC m=+0.158533909 container start 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:16:56 compute-0 sudo[98823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.370707087 +0000 UTC m=+0.162089825 container attach 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 04 10:16:56 compute-0 sudo[98823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.65430212 +0000 UTC m=+0.043697345 container create 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:16:56 compute-0 systemd[1]: Started libpod-conmon-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope.
Dec 04 10:16:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v119: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Dec 04 10:16:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.631304039 +0000 UTC m=+0.020699274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.742917596 +0000 UTC m=+0.132312831 container init 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.749461516 +0000 UTC m=+0.138856731 container start 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.75331692 +0000 UTC m=+0.142712145 container attach 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:16:56 compute-0 vigorous_bose[98897]: 167 167
Dec 04 10:16:56 compute-0 systemd[1]: libpod-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.75497832 +0000 UTC m=+0.144373525 container died 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b4e38d366796cda994eeb5c57c5308fcc7bd71cbb3ce15aa97ff5182f29a0c0-merged.mount: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98881]: 2025-12-04 10:16:56.794169494 +0000 UTC m=+0.183564709 container remove 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:16:56 compute-0 systemd[1]: libpod-conmon-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 04 10:16:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59517235' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 04 10:16:56 compute-0 condescending_wilson[98819]: 
Dec 04 10:16:56 compute-0 ceph-mon[75358]: 7.12 scrub starts
Dec 04 10:16:56 compute-0 ceph-mon[75358]: 7.12 scrub ok
Dec 04 10:16:56 compute-0 ceph-mon[75358]: 5.10 scrub starts
Dec 04 10:16:56 compute-0 ceph-mon[75358]: 5.10 scrub ok
Dec 04 10:16:56 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/59517235' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 04 10:16:56 compute-0 condescending_wilson[98819]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 04 10:16:56 compute-0 systemd[1]: libpod-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.883697443 +0000 UTC m=+0.675080171 container died 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760-merged.mount: Deactivated successfully.
Dec 04 10:16:56 compute-0 podman[98780]: 2025-12-04 10:16:56.934659053 +0000 UTC m=+0.726041781 container remove 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:56 compute-0 systemd[1]: libpod-conmon-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope: Deactivated successfully.
Dec 04 10:16:56 compute-0 sudo[98762]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.022582963 +0000 UTC m=+0.060960195 container create 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:16:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 04 10:16:57 compute-0 systemd[1]: Started libpod-conmon-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope.
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:56.995790951 +0000 UTC m=+0.034168183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:57 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.121317985 +0000 UTC m=+0.159695227 container init 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.130381976 +0000 UTC m=+0.168759188 container start 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.133897881 +0000 UTC m=+0.172275093 container attach 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]: {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     "0": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "devices": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "/dev/loop3"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             ],
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_name": "ceph_lv0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_size": "21470642176",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "name": "ceph_lv0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "tags": {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.crush_device_class": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.encrypted": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_id": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.vdo": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.with_tpm": "0"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             },
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "vg_name": "ceph_vg0"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         }
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     ],
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     "1": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "devices": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "/dev/loop4"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             ],
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_name": "ceph_lv1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_size": "21470642176",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "name": "ceph_lv1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "tags": {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.crush_device_class": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.encrypted": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_id": "1",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.vdo": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.with_tpm": "0"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             },
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "vg_name": "ceph_vg1"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         }
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     ],
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     "2": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "devices": [
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "/dev/loop5"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             ],
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_name": "ceph_lv2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_size": "21470642176",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "name": "ceph_lv2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "tags": {
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.cluster_name": "ceph",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.crush_device_class": "",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.encrypted": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.objectstore": "bluestore",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osd_id": "2",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.vdo": "0",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:                 "ceph.with_tpm": "0"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             },
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "type": "block",
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:             "vg_name": "ceph_vg2"
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:         }
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]:     ]
Dec 04 10:16:57 compute-0 flamboyant_swartz[98951]: }
Dec 04 10:16:57 compute-0 systemd[1]: libpod-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope: Deactivated successfully.
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.502884482 +0000 UTC m=+0.541261714 container died 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f-merged.mount: Deactivated successfully.
Dec 04 10:16:57 compute-0 podman[98935]: 2025-12-04 10:16:57.567516715 +0000 UTC m=+0.605893967 container remove 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:57 compute-0 systemd[1]: libpod-conmon-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope: Deactivated successfully.
Dec 04 10:16:57 compute-0 sudo[98823]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:57 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 04 10:16:57 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 04 10:16:57 compute-0 sudo[98971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:16:57 compute-0 sudo[98971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:57 compute-0 sudo[98971]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:57 compute-0 sudo[98996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:16:57 compute-0 sudo[98996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:57 compute-0 ceph-mon[75358]: pgmap v119: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Dec 04 10:16:57 compute-0 ceph-mon[75358]: 2.14 scrub starts
Dec 04 10:16:57 compute-0 ceph-mon[75358]: 2.14 scrub ok
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:16:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:16:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 04 10:16:58 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.221459481 +0000 UTC m=+0.059400496 container create 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 04 10:16:58 compute-0 systemd[1]: Started libpod-conmon-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope.
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.18937813 +0000 UTC m=+0.027319225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.348880792 +0000 UTC m=+0.186821837 container init 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.358013665 +0000 UTC m=+0.195954690 container start 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.362929464 +0000 UTC m=+0.200870479 container attach 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:16:58 compute-0 dazzling_goodall[99049]: 167 167
Dec 04 10:16:58 compute-0 systemd[1]: libpod-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope: Deactivated successfully.
Dec 04 10:16:58 compute-0 conmon[99049]: conmon 13a6057773824f0ca257 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope/container/memory.events
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.368704084 +0000 UTC m=+0.206645089 container died 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e1616885745e02e87bf322424567776841f1eab2b09468a2985a670cef01e7-merged.mount: Deactivated successfully.
Dec 04 10:16:58 compute-0 podman[99033]: 2025-12-04 10:16:58.40836117 +0000 UTC m=+0.246302195 container remove 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:58 compute-0 systemd[1]: libpod-conmon-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope: Deactivated successfully.
Dec 04 10:16:58 compute-0 podman[99074]: 2025-12-04 10:16:58.605542278 +0000 UTC m=+0.059226782 container create 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:16:58 compute-0 systemd[1]: Started libpod-conmon-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope.
Dec 04 10:16:58 compute-0 podman[99074]: 2025-12-04 10:16:58.575531278 +0000 UTC m=+0.029215782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:16:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:16:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 8.4 KiB/s wr, 180 op/s
Dec 04 10:16:58 compute-0 podman[99074]: 2025-12-04 10:16:58.711890337 +0000 UTC m=+0.165574881 container init 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:16:58 compute-0 podman[99074]: 2025-12-04 10:16:58.722150707 +0000 UTC m=+0.175835191 container start 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:16:58 compute-0 podman[99074]: 2025-12-04 10:16:58.727008715 +0000 UTC m=+0.180693219 container attach 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:16:58 compute-0 ceph-mon[75358]: 3.14 scrub starts
Dec 04 10:16:58 compute-0 ceph-mon[75358]: 3.14 scrub ok
Dec 04 10:16:58 compute-0 ceph-mon[75358]: 2.12 scrub starts
Dec 04 10:16:58 compute-0 ceph-mon[75358]: 2.12 scrub ok
Dec 04 10:16:59 compute-0 lvm[99169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:16:59 compute-0 lvm[99169]: VG ceph_vg0 finished
Dec 04 10:16:59 compute-0 lvm[99170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:16:59 compute-0 lvm[99170]: VG ceph_vg1 finished
Dec 04 10:16:59 compute-0 lvm[99172]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:16:59 compute-0 lvm[99172]: VG ceph_vg2 finished
Dec 04 10:16:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:16:59 compute-0 optimistic_wescoff[99091]: {}
Dec 04 10:16:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 04 10:16:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 04 10:16:59 compute-0 systemd[1]: libpod-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Deactivated successfully.
Dec 04 10:16:59 compute-0 systemd[1]: libpod-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Consumed 1.518s CPU time.
Dec 04 10:16:59 compute-0 podman[99175]: 2025-12-04 10:16:59.729123735 +0000 UTC m=+0.032734368 container died 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067-merged.mount: Deactivated successfully.
Dec 04 10:16:59 compute-0 podman[99175]: 2025-12-04 10:16:59.780349512 +0000 UTC m=+0.083960135 container remove 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:16:59 compute-0 systemd[1]: libpod-conmon-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Deactivated successfully.
Dec 04 10:16:59 compute-0 sudo[98996]: pam_unix(sudo:session): session closed for user root
Dec 04 10:16:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:16:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:16:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 04 10:16:59 compute-0 ceph-mon[75358]: pgmap v120: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 8.4 KiB/s wr, 180 op/s
Dec 04 10:16:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:16:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 04 10:16:59 compute-0 sudo[99190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:16:59 compute-0 sudo[99190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:16:59 compute-0 sudo[99190]: pam_unix(sudo:session): session closed for user root
Dec 04 10:17:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 7.4 KiB/s wr, 160 op/s
Dec 04 10:17:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 04 10:17:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 04 10:17:00 compute-0 ceph-mon[75358]: 3.13 scrub starts
Dec 04 10:17:00 compute-0 ceph-mon[75358]: 3.13 scrub ok
Dec 04 10:17:00 compute-0 ceph-mon[75358]: 6.16 scrub starts
Dec 04 10:17:00 compute-0 ceph-mon[75358]: 6.16 scrub ok
Dec 04 10:17:01 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 04 10:17:01 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 04 10:17:01 compute-0 ceph-mon[75358]: pgmap v121: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 7.4 KiB/s wr, 160 op/s
Dec 04 10:17:01 compute-0 ceph-mon[75358]: 2.10 scrub starts
Dec 04 10:17:01 compute-0 ceph-mon[75358]: 2.10 scrub ok
Dec 04 10:17:01 compute-0 ceph-mon[75358]: 6.10 scrub starts
Dec 04 10:17:01 compute-0 ceph-mon[75358]: 6.10 scrub ok
Dec 04 10:17:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 140 op/s
Dec 04 10:17:02 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 04 10:17:02 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 04 10:17:02 compute-0 ceph-mon[75358]: 6.12 scrub starts
Dec 04 10:17:02 compute-0 ceph-mon[75358]: 6.12 scrub ok
Dec 04 10:17:02 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 04 10:17:02 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 04 10:17:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 04 10:17:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 04 10:17:03 compute-0 ceph-mon[75358]: pgmap v122: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 140 op/s
Dec 04 10:17:03 compute-0 ceph-mon[75358]: 5.17 scrub starts
Dec 04 10:17:03 compute-0 ceph-mon[75358]: 5.17 scrub ok
Dec 04 10:17:03 compute-0 ceph-mon[75358]: 4.c scrub starts
Dec 04 10:17:03 compute-0 ceph-mon[75358]: 4.c scrub ok
Dec 04 10:17:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 5.4 KiB/s wr, 118 op/s
Dec 04 10:17:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 04 10:17:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 04 10:17:04 compute-0 ceph-mon[75358]: 4.0 scrub starts
Dec 04 10:17:04 compute-0 ceph-mon[75358]: 4.0 scrub ok
Dec 04 10:17:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 04 10:17:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 04 10:17:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 04 10:17:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 04 10:17:05 compute-0 ceph-mon[75358]: pgmap v123: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 5.4 KiB/s wr, 118 op/s
Dec 04 10:17:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 04 10:17:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 04 10:17:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec 04 10:17:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 04 10:17:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 7.17 scrub starts
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 7.17 scrub ok
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 5.8 scrub starts
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 5.8 scrub ok
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 2.e scrub starts
Dec 04 10:17:06 compute-0 ceph-mon[75358]: 2.e scrub ok
Dec 04 10:17:07 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 04 10:17:07 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 04 10:17:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 04 10:17:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 04 10:17:07 compute-0 ceph-mon[75358]: 7.16 scrub starts
Dec 04 10:17:07 compute-0 ceph-mon[75358]: 7.16 scrub ok
Dec 04 10:17:07 compute-0 ceph-mon[75358]: pgmap v124: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec 04 10:17:07 compute-0 ceph-mon[75358]: 6.0 scrub starts
Dec 04 10:17:07 compute-0 ceph-mon[75358]: 6.0 scrub ok
Dec 04 10:17:08 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 04 10:17:08 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 04 10:17:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec 04 10:17:08 compute-0 ceph-mon[75358]: 2.c scrub starts
Dec 04 10:17:08 compute-0 ceph-mon[75358]: 2.c scrub ok
Dec 04 10:17:08 compute-0 ceph-mon[75358]: 6.3 scrub starts
Dec 04 10:17:08 compute-0 ceph-mon[75358]: 6.3 scrub ok
Dec 04 10:17:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:09 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 04 10:17:09 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 04 10:17:09 compute-0 ceph-mon[75358]: pgmap v125: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec 04 10:17:09 compute-0 ceph-mon[75358]: 5.b scrub starts
Dec 04 10:17:09 compute-0 ceph-mon[75358]: 5.b scrub ok
Dec 04 10:17:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 04 10:17:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 04 10:17:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 04 10:17:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 04 10:17:10 compute-0 ceph-mon[75358]: 5.a scrub starts
Dec 04 10:17:10 compute-0 ceph-mon[75358]: 5.a scrub ok
Dec 04 10:17:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 04 10:17:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 04 10:17:12 compute-0 ceph-mon[75358]: 3.10 scrub starts
Dec 04 10:17:12 compute-0 ceph-mon[75358]: 3.10 scrub ok
Dec 04 10:17:12 compute-0 ceph-mon[75358]: pgmap v126: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:12 compute-0 ceph-mon[75358]: 2.0 scrub starts
Dec 04 10:17:12 compute-0 ceph-mon[75358]: 2.0 scrub ok
Dec 04 10:17:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:12 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 04 10:17:12 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 04 10:17:13 compute-0 ceph-mon[75358]: 5.0 scrub starts
Dec 04 10:17:13 compute-0 ceph-mon[75358]: 5.0 scrub ok
Dec 04 10:17:13 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 04 10:17:13 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 04 10:17:14 compute-0 ceph-mon[75358]: pgmap v127: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:14 compute-0 ceph-mon[75358]: 2.1 scrub starts
Dec 04 10:17:14 compute-0 ceph-mon[75358]: 2.1 scrub ok
Dec 04 10:17:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 04 10:17:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 04 10:17:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:15 compute-0 ceph-mon[75358]: 4.3 scrub starts
Dec 04 10:17:15 compute-0 ceph-mon[75358]: 4.3 scrub ok
Dec 04 10:17:15 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 04 10:17:15 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 04 10:17:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 04 10:17:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 04 10:17:16 compute-0 ceph-mon[75358]: pgmap v128: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:16 compute-0 ceph-mon[75358]: 5.6 scrub starts
Dec 04 10:17:16 compute-0 ceph-mon[75358]: 5.6 scrub ok
Dec 04 10:17:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 04 10:17:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 04 10:17:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 04 10:17:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 04 10:17:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:17 compute-0 ceph-mon[75358]: 7.14 scrub starts
Dec 04 10:17:17 compute-0 ceph-mon[75358]: 7.14 scrub ok
Dec 04 10:17:17 compute-0 ceph-mon[75358]: 5.e scrub starts
Dec 04 10:17:17 compute-0 ceph-mon[75358]: 5.e scrub ok
Dec 04 10:17:17 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 04 10:17:17 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 04 10:17:18 compute-0 ceph-mon[75358]: 7.b scrub starts
Dec 04 10:17:18 compute-0 ceph-mon[75358]: 7.b scrub ok
Dec 04 10:17:18 compute-0 ceph-mon[75358]: pgmap v129: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:18 compute-0 ceph-mon[75358]: 5.d scrub starts
Dec 04 10:17:18 compute-0 ceph-mon[75358]: 5.d scrub ok
Dec 04 10:17:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 04 10:17:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 04 10:17:19 compute-0 ceph-mon[75358]: pgmap v130: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:19 compute-0 ceph-mon[75358]: 6.1b scrub starts
Dec 04 10:17:19 compute-0 ceph-mon[75358]: 6.1b scrub ok
Dec 04 10:17:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:21 compute-0 ceph-mon[75358]: pgmap v131: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:22 compute-0 sshd-session[99215]: Received disconnect from 103.149.86.230 port 41146:11: Bye Bye [preauth]
Dec 04 10:17:22 compute-0 sshd-session[99215]: Disconnected from authenticating user root 103.149.86.230 port 41146 [preauth]
Dec 04 10:17:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:23 compute-0 ceph-mon[75358]: pgmap v132: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 04 10:17:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 04 10:17:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 04 10:17:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 04 10:17:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 04 10:17:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 04 10:17:25 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 04 10:17:25 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 04 10:17:25 compute-0 ceph-mon[75358]: 3.d scrub starts
Dec 04 10:17:25 compute-0 ceph-mon[75358]: 3.d scrub ok
Dec 04 10:17:25 compute-0 ceph-mon[75358]: pgmap v133: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:25 compute-0 ceph-mon[75358]: 5.1b scrub starts
Dec 04 10:17:25 compute-0 ceph-mon[75358]: 5.1b scrub ok
Dec 04 10:17:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 04 10:17:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 04 10:17:26 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 04 10:17:26 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:17:26
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:17:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:26 compute-0 ceph-mon[75358]: 3.2 scrub starts
Dec 04 10:17:26 compute-0 ceph-mon[75358]: 3.2 scrub ok
Dec 04 10:17:26 compute-0 ceph-mon[75358]: 4.19 scrub starts
Dec 04 10:17:26 compute-0 ceph-mon[75358]: 4.19 scrub ok
Dec 04 10:17:27 compute-0 ceph-mon[75358]: 7.10 scrub starts
Dec 04 10:17:27 compute-0 ceph-mon[75358]: 7.10 scrub ok
Dec 04 10:17:27 compute-0 ceph-mon[75358]: 6.18 scrub starts
Dec 04 10:17:27 compute-0 ceph-mon[75358]: 6.18 scrub ok
Dec 04 10:17:27 compute-0 ceph-mon[75358]: pgmap v134: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:17:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:17:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 04 10:17:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 04 10:17:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:28 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 04 10:17:28 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 04 10:17:28 compute-0 ceph-mon[75358]: 4.1c scrub starts
Dec 04 10:17:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 04 10:17:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 04 10:17:29 compute-0 ceph-mon[75358]: 3.0 scrub starts
Dec 04 10:17:29 compute-0 ceph-mon[75358]: 3.0 scrub ok
Dec 04 10:17:29 compute-0 ceph-mon[75358]: pgmap v135: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:29 compute-0 ceph-mon[75358]: 4.1c scrub ok
Dec 04 10:17:29 compute-0 ceph-mon[75358]: 3.18 scrub starts
Dec 04 10:17:29 compute-0 ceph-mon[75358]: 3.18 scrub ok
Dec 04 10:17:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 04 10:17:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 04 10:17:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 04 10:17:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 04 10:17:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:31 compute-0 ceph-mon[75358]: 7.0 scrub starts
Dec 04 10:17:31 compute-0 ceph-mon[75358]: 7.0 scrub ok
Dec 04 10:17:31 compute-0 ceph-mon[75358]: 6.7 scrub starts
Dec 04 10:17:31 compute-0 ceph-mon[75358]: 6.7 scrub ok
Dec 04 10:17:31 compute-0 ceph-mon[75358]: pgmap v136: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:32 compute-0 sshd-session[99217]: Received disconnect from 217.154.62.22 port 54410:11: Bye Bye [preauth]
Dec 04 10:17:32 compute-0 sshd-session[99217]: Disconnected from authenticating user root 217.154.62.22 port 54410 [preauth]
Dec 04 10:17:32 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 04 10:17:32 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.041313365636459e-06 of space, bias 4.0, pg target 0.001249576038763751 quantized to 16 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:17:32 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 04 10:17:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:17:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 04 10:17:33 compute-0 ceph-mon[75358]: 6.19 scrub starts
Dec 04 10:17:33 compute-0 ceph-mon[75358]: 6.19 scrub ok
Dec 04 10:17:33 compute-0 ceph-mon[75358]: pgmap v137: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 04 10:17:33 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 04 10:17:33 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 04 10:17:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:17:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v139: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:17:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 04 10:17:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 04 10:17:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:34 compute-0 ceph-mon[75358]: osdmap e54: 3 total, 3 up, 3 in
Dec 04 10:17:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 04 10:17:34 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 04 10:17:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:17:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 04 10:17:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 04 10:17:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 04 10:17:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 04 10:17:35 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=13.839031219s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 46'5 active pruub 140.592666626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:35 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 04 10:17:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 04 10:17:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:35 compute-0 ceph-mon[75358]: pgmap v139: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:35 compute-0 ceph-mon[75358]: osdmap e55: 3 total, 3 up, 3 in
Dec 04 10:17:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:35 compute-0 ceph-mon[75358]: 6.1f scrub starts
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.0( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=13.839031219s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 unknown pruub 140.592666626s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.17( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.16( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.7( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.13( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.19( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.3( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.8( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.5( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v142: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:36 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 04 10:17:36 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 04 10:17:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 04 10:17:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 04 10:17:36 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 57 pg[10.0( v 53'18 (0'0,53'18] local-lis/les=49/50 n=9 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=8.880224228s) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 53'17 active pruub 128.044174194s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 04 10:17:36 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 04 10:17:36 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 57 pg[10.0( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=8.880224228s) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 0'0 unknown pruub 128.044174194s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=47/48 n=210 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.854414940s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 53'482 active pruub 142.611846924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-mon[75358]: 6.1f scrub ok
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: osdmap e56: 3 total, 3 up, 3 in
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:36 compute-0 ceph-mon[75358]: 6.13 scrub starts
Dec 04 10:17:36 compute-0 ceph-mon[75358]: 6.13 scrub ok
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:36 compute-0 ceph-mon[75358]: osdmap e57: 3 total, 3 up, 3 in
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.16( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.17( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.3( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.8( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.7( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.5( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.19( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.13( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.854414940s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 unknown pruub 142.611846924s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecc00 space 0x559006b8f140 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007235180 space 0x559008917440 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007189d80 space 0x559008c40240 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec580 space 0x559008082840 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9480 space 0x559006a5e240 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007729880 space 0x559008083740 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x55900723ac00 space 0x559008a09d40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec900 space 0x55900731f740 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed880 space 0x5590084eb140 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ede80 space 0x55900731ee40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007302b80 space 0x5590084ed440 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0f00 space 0x559008a19a40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071caf80 space 0x55900891d740 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecf80 space 0x55900685c840 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0b00 space 0x559008c41740 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed180 space 0x559007376540 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072cbe80 space 0x559006a5eb40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214780 space 0x559008917740 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec500 space 0x559008478b40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecd80 space 0x559007353140 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728300 space 0x559008916540 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0400 space 0x559007355140 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234e00 space 0x559008a09440 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec700 space 0x559008478240 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218b80 space 0x5590088c4e40 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234a00 space 0x559008a20240 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218000 space 0x559008916b40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218f00 space 0x559008c3e240 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2b980 space 0x5590084edd40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2b880 space 0x55900892b140 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234d00 space 0x5590088fcb40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8c00 space 0x559007355a40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214500 space 0x5590088cd440 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x55900723a080 space 0x559008919d40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8580 space 0x5590084ea840 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218c80 space 0x559007377740 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071eda80 space 0x5590084eba40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184780 space 0x559008c3f440 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007177380 space 0x559006b8eb40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9b00 space 0x559008082e40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728500 space 0x5590084ecb40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234080 space 0x559008a20b40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007155880 space 0x559008082240 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728880 space 0x559007649740 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072cae00 space 0x5590088ee840 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184900 space 0x5590088ef440 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007215380 space 0x55900891c540 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9180 space 0x559007353d40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007155900 space 0x559008478840 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2a080 space 0x559008479d40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9300 space 0x5590088cda40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2a800 space 0x559008479440 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184980 space 0x5590088e1140 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8a00 space 0x5590084ec240 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218780 space 0x559006a5fa40 0x0~9a clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed380 space 0x559007376e40 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072fac00 space 0x559007354840 0x0~6e clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218e80 space 0x559008919140 0x0~98 clean)
Dec 04 10:17:36 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214580 space 0x559008916e40 0x0~6e clean)
Dec 04 10:17:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 04 10:17:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 04 10:17:37 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 04 10:17:37 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 04 10:17:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 04 10:17:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 04 10:17:37 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1b( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.b( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.d( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.a( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.13( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.12( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.11( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1e( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.10( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1f( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1d( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1c( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1a( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.19( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.18( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.7( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.6( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.4( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.8( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.f( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.5( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.9( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.e( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.c( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.2( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.3( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.14( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.15( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.16( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.17( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.d( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.12( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1d( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1c( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.18( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.9( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.0( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.5( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.c( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.15( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.3( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.14( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:37 compute-0 ceph-mon[75358]: pgmap v142: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:37 compute-0 ceph-mon[75358]: 7.1c scrub starts
Dec 04 10:17:37 compute-0 ceph-mon[75358]: 7.1c scrub ok
Dec 04 10:17:37 compute-0 ceph-mon[75358]: osdmap e58: 3 total, 3 up, 3 in
Dec 04 10:17:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 04 10:17:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 04 10:17:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v145: 290 pgs: 1 peering, 62 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 04 10:17:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 04 10:17:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 04 10:17:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 04 10:17:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 04 10:17:38 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 04 10:17:38 compute-0 ceph-mon[75358]: 3.4 scrub starts
Dec 04 10:17:38 compute-0 ceph-mon[75358]: 3.4 scrub ok
Dec 04 10:17:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 04 10:17:38 compute-0 ceph-mon[75358]: 3.16 scrub starts
Dec 04 10:17:38 compute-0 ceph-mon[75358]: 3.16 scrub ok
Dec 04 10:17:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.693107605s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active pruub 138.647598267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.693107605s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown pruub 138.647598267s@ mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:39 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 16 completed events
Dec 04 10:17:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:17:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:17:39 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 04 10:17:39 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 04 10:17:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 04 10:17:40 compute-0 ceph-mon[75358]: 3.b scrub starts
Dec 04 10:17:40 compute-0 ceph-mon[75358]: 3.b scrub ok
Dec 04 10:17:40 compute-0 ceph-mon[75358]: pgmap v145: 290 pgs: 1 peering, 62 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 04 10:17:40 compute-0 ceph-mon[75358]: osdmap e59: 3 total, 3 up, 3 in
Dec 04 10:17:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:17:40 compute-0 ceph-mon[75358]: 4.6 scrub starts
Dec 04 10:17:40 compute-0 ceph-mon[75358]: 4.6 scrub ok
Dec 04 10:17:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 04 10:17:40 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 04 10:17:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 04 10:17:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 1 peering, 93 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:41 compute-0 ceph-mon[75358]: osdmap e60: 3 total, 3 up, 3 in
Dec 04 10:17:41 compute-0 ceph-mon[75358]: 4.b scrub starts
Dec 04 10:17:41 compute-0 ceph-mon[75358]: 4.b scrub ok
Dec 04 10:17:41 compute-0 ceph-mon[75358]: pgmap v148: 321 pgs: 1 peering, 93 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:41 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 04 10:17:41 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 04 10:17:42 compute-0 ceph-mon[75358]: 4.11 scrub starts
Dec 04 10:17:42 compute-0 ceph-mon[75358]: 4.11 scrub ok
Dec 04 10:17:42 compute-0 sudo[99242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wruzqphrxclaihzeezqonvlrfiviivqn ; /usr/bin/python3'
Dec 04 10:17:42 compute-0 sudo[99242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:17:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 04 10:17:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 04 10:17:42 compute-0 python3[99244]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.597985538 +0000 UTC m=+0.047414883 container create a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:17:42 compute-0 systemd[1]: Started libpod-conmon-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope.
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.579218732 +0000 UTC m=+0.028648107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:17:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.702161396 +0000 UTC m=+0.151590741 container init a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.709824972 +0000 UTC m=+0.159254317 container start a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.713549083 +0000 UTC m=+0.162978438 container attach a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:17:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:17:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:17:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 04 10:17:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 04 10:17:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:17:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:42 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 04 10:17:42 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 04 10:17:42 compute-0 eager_leavitt[99260]: could not fetch user info: no user info saved
Dec 04 10:17:42 compute-0 systemd[1]: libpod-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope: Deactivated successfully.
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.91819422 +0000 UTC m=+0.367623585 container died a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e-merged.mount: Deactivated successfully.
Dec 04 10:17:42 compute-0 podman[99245]: 2025-12-04 10:17:42.958229303 +0000 UTC m=+0.407658638 container remove a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:17:42 compute-0 systemd[1]: libpod-conmon-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope: Deactivated successfully.
Dec 04 10:17:42 compute-0 sudo[99242]: pam_unix(sudo:session): session closed for user root
Dec 04 10:17:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 04 10:17:43 compute-0 ceph-mon[75358]: pgmap v149: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 04 10:17:43 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:17:43 compute-0 ceph-mon[75358]: 4.13 scrub starts
Dec 04 10:17:43 compute-0 ceph-mon[75358]: 4.13 scrub ok
Dec 04 10:17:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 04 10:17:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 04 10:17:43 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842870712s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.759933472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842812538s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.759933472s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851659775s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.768844604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851603508s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.768844604s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842667580s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.759948730s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842653275s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.759948730s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851858139s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178298950s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972802162s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.890274048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972784996s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.890274048s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851296425s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.768844604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.974140167s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851257324s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.768844604s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.974126816s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851811409s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178298950s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854658127s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772491455s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848490715s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766357422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854634285s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772491455s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848474503s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766357422s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973777771s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891784668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973707199s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973757744s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891784668s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973693848s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848252296s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766403198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.d( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851372719s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.178115845s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.d( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851303101s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.178115845s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.12( v 60'19 (0'0,60'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851553917s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 53'18 active pruub 136.178405762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854074478s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772399902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854035378s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772399902s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848249435s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766647339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973383904s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891845703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848228455s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766647339s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848234177s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766403198s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973370552s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891845703s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973180771s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973235130s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891799927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973223686s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891799927s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973237038s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891830444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847897530s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766525269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973223686s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891830444s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973142624s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847876549s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766525269s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847798347s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766540527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853405952s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.772171021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847778320s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766540527s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853387833s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.772171021s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972988129s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891815186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972967148s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891815186s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853140831s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772018433s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851790428s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178665161s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.12( v 60'19 (0'0,60'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851522446s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 53'18 unknown NOTIFY pruub 136.178405762s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851760864s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178665161s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851792336s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178634644s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851577759s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178634644s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851357460s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178497314s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851336479s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851150513s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178344727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851134300s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178344727s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851096153s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178604126s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851073265s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178604126s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851023674s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178588867s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851001740s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178588867s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851005554s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178741455s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850990295s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178741455s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853290558s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772216797s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972928047s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891876221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972913742s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891876221s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853256226s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772216797s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853258133s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772262573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853247643s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772262573s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853122711s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772018433s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847470284s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766601562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847447395s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766601562s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972467422s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.892028809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850154877s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178802490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850131989s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178802490s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850015640s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178833008s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849994659s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178833008s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849793434s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178771973s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849775314s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178771973s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.9( v 60'22 (0'0,60'22] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850012779s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.179077148s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849605560s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178710938s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.9( v 60'22 (0'0,60'22] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849981308s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.179077148s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972025871s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.892028809s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845895767s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766632080s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845875740s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766632080s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849592209s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178710938s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.e( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849787712s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.179092407s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849862099s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.179183960s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849843979s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.179183960s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.e( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849713326s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.179092407s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849770546s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.179244995s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849744797s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.179244995s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.14( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851269722s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.180892944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.14( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851244926s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.180892944s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.15( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850983620s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.180740356s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.15( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850947380s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.180740356s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851053238s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850997925s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.180862427s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850710869s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.180831909s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850689888s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.180831909s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970975876s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891983032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970958710s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891983032s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845645905s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766677856s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845627785s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766677856s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850890160s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772354126s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970510483s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891998291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850800514s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772354126s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845039368s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766784668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845014572s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766784668s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850543022s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772384644s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970182419s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.892028809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970166206s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.892028809s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850509644s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772384644s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969996452s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891998291s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850395203s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772445679s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844781876s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766845703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973319054s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895385742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844763756s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766845703s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973299980s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895385742s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850362778s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772445679s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844604492s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766876221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973101616s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895401001s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973128319s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895431519s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844582558s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766876221s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850190163s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772506714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973111153s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895431519s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973078728s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895401001s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850171089s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772506714s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844511986s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766952515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844500542s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766952515s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972947121s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895416260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972927094s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895416260s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844467163s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767059326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849922180s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772506714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849894524s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772506714s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972844124s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895492554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844441414s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767059326s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972822189s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895492554s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973083496s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973070145s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895812988s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849761009s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772552490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844330788s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767135620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849743843s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772552490s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844311714s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767135620s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844038963s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767089844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972417831s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895492554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844010353s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767089844s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972396851s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895492554s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972160339s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895507812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972143173s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895507812s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972132683s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895599365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972115517s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895599365s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854270935s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.777877808s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.843025208s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767227173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854243279s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.777877808s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842938423s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767227173s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853326797s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.777832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853282928s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.777832031s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970718384s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895523071s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842124939s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842099190s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767242432s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.841900826s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767257690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.841798782s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767257690s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970474243s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895523071s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1a( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:43 compute-0 sudo[99380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dddivnicorfmfqnysyldztsnopzomxsd ; /usr/bin/python3'
Dec 04 10:17:43 compute-0 sudo[99380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:17:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 04 10:17:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 04 10:17:43 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 04 10:17:43 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 04 10:17:43 compute-0 python3[99382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:17:43 compute-0 podman[99383]: 2025-12-04 10:17:43.925323931 +0000 UTC m=+0.053841039 container create 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:17:43 compute-0 systemd[1]: Started libpod-conmon-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope.
Dec 04 10:17:43 compute-0 podman[99383]: 2025-12-04 10:17:43.897235558 +0000 UTC m=+0.025752756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 04 10:17:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:17:44 compute-0 podman[99383]: 2025-12-04 10:17:44.012838825 +0000 UTC m=+0.141356023 container init 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:17:44 compute-0 podman[99383]: 2025-12-04 10:17:44.021831103 +0000 UTC m=+0.150348251 container start 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:17:44 compute-0 podman[99383]: 2025-12-04 10:17:44.026069255 +0000 UTC m=+0.154586403 container attach 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:17:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 04 10:17:44 compute-0 ceph-mon[75358]: 7.7 scrub starts
Dec 04 10:17:44 compute-0 ceph-mon[75358]: 7.7 scrub ok
Dec 04 10:17:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 04 10:17:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:17:44 compute-0 ceph-mon[75358]: osdmap e61: 3 total, 3 up, 3 in
Dec 04 10:17:44 compute-0 ceph-mon[75358]: 7.15 scrub starts
Dec 04 10:17:44 compute-0 ceph-mon[75358]: 7.15 scrub ok
Dec 04 10:17:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 04 10:17:44 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.14( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.12( v 60'19 lc 53'17 (0'0,60'19] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=60'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.e( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.d( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.15( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.9( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:44 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 04 10:17:44 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 04 10:17:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 04 10:17:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 04 10:17:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 04 10:17:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 04 10:17:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 04 10:17:45 compute-0 ceph-mon[75358]: 7.d scrub starts
Dec 04 10:17:45 compute-0 ceph-mon[75358]: 7.d scrub ok
Dec 04 10:17:45 compute-0 ceph-mon[75358]: osdmap e62: 3 total, 3 up, 3 in
Dec 04 10:17:45 compute-0 ceph-mon[75358]: 7.11 scrub starts
Dec 04 10:17:45 compute-0 ceph-mon[75358]: 7.11 scrub ok
Dec 04 10:17:45 compute-0 ceph-mon[75358]: pgmap v152: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 04 10:17:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 04 10:17:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 04 10:17:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]: {
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "user_id": "openstack",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "display_name": "openstack",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "email": "",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "suspended": 0,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "max_buckets": 1000,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "subusers": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "keys": [
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         {
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:             "user": "openstack",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:             "access_key": "MV558CNQ0495KIP242HY",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:             "secret_key": "3t3T6cs6kVOAPfJDg1f7fdophmZLswl1bIUyAmXg",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:             "active": true,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:             "create_date": "2025-12-04T10:17:45.801730Z"
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         }
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     ],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "swift_keys": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "caps": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "op_mask": "read, write, delete",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "default_placement": "",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "default_storage_class": "",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "placement_tags": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "bucket_quota": {
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "enabled": false,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "check_on_raw": false,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_size": -1,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_size_kb": 0,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_objects": -1
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     },
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "user_quota": {
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "enabled": false,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "check_on_raw": false,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_size": -1,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_size_kb": 0,
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:         "max_objects": -1
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     },
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "temp_url_keys": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "type": "rgw",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "mfa_ids": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "account_id": "",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "path": "/",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "create_date": "2025-12-04T10:17:45.800970Z",
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "tags": [],
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]:     "group_ids": []
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]: }
Dec 04 10:17:45 compute-0 lucid_goldstine[99399]: 
Dec 04 10:17:45 compute-0 systemd[1]: libpod-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope: Deactivated successfully.
Dec 04 10:17:45 compute-0 podman[99383]: 2025-12-04 10:17:45.849309618 +0000 UTC m=+1.977826726 container died 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d-merged.mount: Deactivated successfully.
Dec 04 10:17:45 compute-0 podman[99383]: 2025-12-04 10:17:45.909087928 +0000 UTC m=+2.037605026 container remove 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:17:45 compute-0 systemd[1]: libpod-conmon-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope: Deactivated successfully.
Dec 04 10:17:45 compute-0 sudo[99380]: pam_unix(sudo:session): session closed for user root
Dec 04 10:17:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 04 10:17:46 compute-0 ceph-mon[75358]: 7.19 scrub starts
Dec 04 10:17:46 compute-0 ceph-mon[75358]: 7.19 scrub ok
Dec 04 10:17:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 04 10:17:46 compute-0 ceph-mon[75358]: osdmap e63: 3 total, 3 up, 3 in
Dec 04 10:17:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 04 10:17:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.502585411s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=63'486 lcod 63'486 active pruub 152.477401733s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.502213478s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.477401733s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506839752s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 active pruub 152.482147217s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506290436s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.481658936s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506756783s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482147217s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506227493s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.481658936s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506434441s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.481796265s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.501778603s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.477355957s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.501735687s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.477355957s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506053925s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.481796265s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505730629s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.482131958s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505750656s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 active pruub 152.482177734s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505625725s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482131958s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505614281s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482177734s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:46 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:46 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 04 10:17:46 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 04 10:17:46 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 04 10:17:46 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 04 10:17:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 264 B/s, 0 objects/s recovering
Dec 04 10:17:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 04 10:17:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 04 10:17:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 04 10:17:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 04 10:17:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 04 10:17:47 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.507721901s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.482498169s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.507640839s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482498169s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506669044s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 active pruub 152.481643677s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506562233s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.481643677s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.511236191s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.486373901s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.511168480s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.486373901s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506362915s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.482223511s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506306648s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.482238770s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505764008s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'487 active pruub 152.481842041s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506172180s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.482238770s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505682945s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'487 unknown NOTIFY pruub 152.481842041s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.509919167s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.486312866s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.509859085s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.486312866s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505632401s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.482223511s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505712509s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.482543945s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505644798s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482543945s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-mon[75358]: osdmap e64: 3 total, 3 up, 3 in
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-mon[75358]: 6.14 scrub starts
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-mon[75358]: 6.9 scrub starts
Dec 04 10:17:47 compute-0 ceph-mon[75358]: 6.9 scrub ok
Dec 04 10:17:47 compute-0 ceph-mon[75358]: 6.14 scrub ok
Dec 04 10:17:47 compute-0 ceph-mon[75358]: pgmap v155: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 264 B/s, 0 objects/s recovering
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505240440s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 active pruub 152.482498169s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 04 10:17:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505127907s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482498169s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=64/65 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:47 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 04 10:17:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 04 10:17:48 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 04 10:17:48 compute-0 ceph-mon[75358]: osdmap e65: 3 total, 3 up, 3 in
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'488 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:17:48 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 04 10:17:48 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 04 10:17:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 511 B/s wr, 64 op/s; 1.4 KiB/s, 27 objects/s recovering
Dec 04 10:17:49 compute-0 ceph-mon[75358]: osdmap e66: 3 total, 3 up, 3 in
Dec 04 10:17:49 compute-0 ceph-mon[75358]: 6.5 scrub starts
Dec 04 10:17:49 compute-0 ceph-mon[75358]: 6.5 scrub ok
Dec 04 10:17:49 compute-0 ceph-mon[75358]: pgmap v158: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 511 B/s wr, 64 op/s; 1.4 KiB/s, 27 objects/s recovering
Dec 04 10:17:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 04 10:17:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 04 10:17:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:50 compute-0 ceph-mon[75358]: 6.a scrub starts
Dec 04 10:17:50 compute-0 ceph-mon[75358]: 6.a scrub ok
Dec 04 10:17:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 04 10:17:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 04 10:17:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 365 B/s wr, 45 op/s; 1.0 KiB/s, 20 objects/s recovering
Dec 04 10:17:51 compute-0 ceph-mon[75358]: pgmap v159: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 365 B/s wr, 45 op/s; 1.0 KiB/s, 20 objects/s recovering
Dec 04 10:17:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 04 10:17:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 04 10:17:52 compute-0 ceph-mon[75358]: 5.19 scrub starts
Dec 04 10:17:52 compute-0 ceph-mon[75358]: 5.19 scrub ok
Dec 04 10:17:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 38 op/s; 731 B/s, 16 objects/s recovering
Dec 04 10:17:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 04 10:17:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 04 10:17:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 04 10:17:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 04 10:17:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 04 10:17:53 compute-0 ceph-mon[75358]: 4.1d scrub starts
Dec 04 10:17:53 compute-0 ceph-mon[75358]: 4.1d scrub ok
Dec 04 10:17:53 compute-0 ceph-mon[75358]: pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 38 op/s; 731 B/s, 16 objects/s recovering
Dec 04 10:17:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 04 10:17:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 04 10:17:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 04 10:17:53 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 04 10:17:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 04 10:17:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 04 10:17:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:17:54 compute-0 ceph-mon[75358]: 5.18 scrub starts
Dec 04 10:17:54 compute-0 ceph-mon[75358]: 5.18 scrub ok
Dec 04 10:17:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 04 10:17:54 compute-0 ceph-mon[75358]: osdmap e67: 3 total, 3 up, 3 in
Dec 04 10:17:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 33 op/s; 635 B/s, 14 objects/s recovering
Dec 04 10:17:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 04 10:17:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:17:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 04 10:17:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 04 10:17:55 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 04 10:17:55 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 04 10:17:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 04 10:17:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 04 10:17:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 04 10:17:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 04 10:17:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:17:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:17:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 04 10:17:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:17:59 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 04 10:17:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 04 10:17:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 04 10:17:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:17:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 04 10:17:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:17:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 04 10:17:59 compute-0 ceph-mon[75358]: 5.1a scrub starts
Dec 04 10:17:59 compute-0 ceph-mon[75358]: 5.1a scrub ok
Dec 04 10:17:59 compute-0 ceph-mon[75358]: pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 33 op/s; 635 B/s, 14 objects/s recovering
Dec 04 10:17:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:17:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:17:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 04 10:17:59 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 04 10:17:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:00 compute-0 sudo[99498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:18:00 compute-0 sudo[99498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:00 compute-0 sudo[99498]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:00 compute-0 sudo[99523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:18:00 compute-0 sudo[99523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 04 10:18:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 5.f scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 5.f scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 4.1e scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 4.1e scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 3.11 scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 3.11 scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 2.6 scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 2.6 scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: pgmap v163: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 7.a scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 5.1 scrub starts
Dec 04 10:18:00 compute-0 ceph-mon[75358]: pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 7.a scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: 5.1 scrub ok
Dec 04 10:18:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:18:00 compute-0 ceph-mon[75358]: osdmap e68: 3 total, 3 up, 3 in
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 04 10:18:00 compute-0 sudo[99523]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:18:00 compute-0 sudo[99580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:18:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 04 10:18:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 04 10:18:00 compute-0 sudo[99580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:00 compute-0 sudo[99580]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:00 compute-0 sudo[99605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:18:00 compute-0 sudo[99605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:01 compute-0 sshd-session[99630]: Accepted publickey for zuul from 192.168.122.30 port 59188 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:18:01 compute-0 systemd-logind[798]: New session 34 of user zuul.
Dec 04 10:18:01 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec 04 10:18:01 compute-0 sshd-session[99630]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.114176602 +0000 UTC m=+0.041491958 container create d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 10:18:01 compute-0 systemd[1]: Started libpod-conmon-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope.
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.095744205 +0000 UTC m=+0.023059581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.247450758 +0000 UTC m=+0.174766134 container init d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.255238497 +0000 UTC m=+0.182553853 container start d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.258569567 +0000 UTC m=+0.185884924 container attach d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:18:01 compute-0 flamboyant_euclid[99690]: 167 167
Dec 04 10:18:01 compute-0 systemd[1]: libpod-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope: Deactivated successfully.
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.262030582 +0000 UTC m=+0.189345938 container died d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7b3a43540bdb13484c37b1f663db10d1fca72385bbd477cc2332c86f70aaec-merged.mount: Deactivated successfully.
Dec 04 10:18:01 compute-0 podman[99645]: 2025-12-04 10:18:01.298261582 +0000 UTC m=+0.225576928 container remove d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:18:01 compute-0 systemd[1]: libpod-conmon-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope: Deactivated successfully.
Dec 04 10:18:01 compute-0 podman[99738]: 2025-12-04 10:18:01.436084347 +0000 UTC m=+0.039803807 container create ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:18:01 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 04 10:18:01 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 04 10:18:01 compute-0 systemd[1]: Started libpod-conmon-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope.
Dec 04 10:18:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:01 compute-0 podman[99738]: 2025-12-04 10:18:01.418838039 +0000 UTC m=+0.022557499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:01 compute-0 podman[99738]: 2025-12-04 10:18:01.523094849 +0000 UTC m=+0.126814319 container init ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:18:01 compute-0 podman[99738]: 2025-12-04 10:18:01.531164685 +0000 UTC m=+0.134884185 container start ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:18:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 04 10:18:01 compute-0 podman[99738]: 2025-12-04 10:18:01.535874009 +0000 UTC m=+0.139593479 container attach ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 04 10:18:01 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 04 10:18:01 compute-0 ceph-mon[75358]: 7.8 scrub starts
Dec 04 10:18:01 compute-0 ceph-mon[75358]: 7.8 scrub ok
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 04 10:18:01 compute-0 ceph-mon[75358]: osdmap e69: 3 total, 3 up, 3 in
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:18:01 compute-0 ceph-mon[75358]: pgmap v167: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 04 10:18:01 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 04 10:18:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 04 10:18:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 04 10:18:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 04 10:18:01 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.265105247s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 160.769821167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.264783859s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 160.769821167s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267188072s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=63'488 lcod 63'488 active pruub 160.772674561s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267125130s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=63'488 lcod 63'488 unknown NOTIFY pruub 160.772674561s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267120361s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 160.772811890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.266900063s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 160.772811890s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.271852493s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'484 lcod 60'484 active pruub 160.778198242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:01 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.271541595s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 160.778198242s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:01 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:01 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:01 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:01 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:02 compute-0 zen_haslett[99755]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:18:02 compute-0 zen_haslett[99755]: --> All data devices are unavailable
Dec 04 10:18:02 compute-0 systemd[1]: libpod-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope: Deactivated successfully.
Dec 04 10:18:02 compute-0 python3.9[99859]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:18:02 compute-0 podman[99873]: 2025-12-04 10:18:02.140418386 +0000 UTC m=+0.037454641 container died ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784-merged.mount: Deactivated successfully.
Dec 04 10:18:02 compute-0 podman[99873]: 2025-12-04 10:18:02.260166223 +0000 UTC m=+0.157202458 container remove ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:18:02 compute-0 systemd[1]: libpod-conmon-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope: Deactivated successfully.
Dec 04 10:18:02 compute-0 sudo[99605]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:02 compute-0 sudo[99905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:18:02 compute-0 sudo[99905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:02 compute-0 sudo[99905]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:02 compute-0 sudo[99930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:18:02 compute-0 sudo[99930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:02 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 04 10:18:02 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 7.5 scrub starts
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 7.5 scrub ok
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 2.4 scrub starts
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 2.4 scrub ok
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 4.1f scrub starts
Dec 04 10:18:02 compute-0 ceph-mon[75358]: 4.1f scrub ok
Dec 04 10:18:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 04 10:18:02 compute-0 ceph-mon[75358]: osdmap e70: 3 total, 3 up, 3 in
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.688122092 +0000 UTC m=+0.038233239 container create bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 04 10:18:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 04 10:18:02 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:02 compute-0 systemd[1]: Started libpod-conmon-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope.
Dec 04 10:18:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 04 10:18:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 04 10:18:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.671816087 +0000 UTC m=+0.021927254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.780251619 +0000 UTC m=+0.130362816 container init bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.790362094 +0000 UTC m=+0.140473241 container start bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.794523735 +0000 UTC m=+0.144634932 container attach bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:18:02 compute-0 interesting_hellman[100031]: 167 167
Dec 04 10:18:02 compute-0 systemd[1]: libpod-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope: Deactivated successfully.
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.798354288 +0000 UTC m=+0.148465445 container died bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d4a3e0ae8a4b8825eb00c280d98fe893f7f8784d282261429939d026031113-merged.mount: Deactivated successfully.
Dec 04 10:18:02 compute-0 podman[100007]: 2025-12-04 10:18:02.836827242 +0000 UTC m=+0.186938389 container remove bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:18:02 compute-0 systemd[1]: libpod-conmon-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope: Deactivated successfully.
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.037666638 +0000 UTC m=+0.050718182 container create 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:18:03 compute-0 systemd[1]: Started libpod-conmon-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope.
Dec 04 10:18:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.010726024 +0000 UTC m=+0.023777648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.120250383 +0000 UTC m=+0.133301947 container init 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.127305284 +0000 UTC m=+0.140356828 container start 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.13167039 +0000 UTC m=+0.144721954 container attach 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:18:03 compute-0 great_murdock[100130]: {
Dec 04 10:18:03 compute-0 great_murdock[100130]:     "0": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:         {
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "devices": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "/dev/loop3"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             ],
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_name": "ceph_lv0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_size": "21470642176",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "name": "ceph_lv0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "tags": {
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_name": "ceph",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.crush_device_class": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.encrypted": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.objectstore": "bluestore",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_id": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.vdo": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.with_tpm": "0"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             },
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "vg_name": "ceph_vg0"
Dec 04 10:18:03 compute-0 great_murdock[100130]:         }
Dec 04 10:18:03 compute-0 great_murdock[100130]:     ],
Dec 04 10:18:03 compute-0 great_murdock[100130]:     "1": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:         {
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "devices": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "/dev/loop4"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             ],
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_name": "ceph_lv1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_size": "21470642176",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "name": "ceph_lv1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "tags": {
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_name": "ceph",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.crush_device_class": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.encrypted": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.objectstore": "bluestore",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_id": "1",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.vdo": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.with_tpm": "0"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             },
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "vg_name": "ceph_vg1"
Dec 04 10:18:03 compute-0 great_murdock[100130]:         }
Dec 04 10:18:03 compute-0 great_murdock[100130]:     ],
Dec 04 10:18:03 compute-0 great_murdock[100130]:     "2": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:         {
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "devices": [
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "/dev/loop5"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             ],
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_name": "ceph_lv2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_size": "21470642176",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "name": "ceph_lv2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "tags": {
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.cluster_name": "ceph",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.crush_device_class": "",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.encrypted": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.objectstore": "bluestore",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osd_id": "2",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.vdo": "0",
Dec 04 10:18:03 compute-0 great_murdock[100130]:                 "ceph.with_tpm": "0"
Dec 04 10:18:03 compute-0 great_murdock[100130]:             },
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "type": "block",
Dec 04 10:18:03 compute-0 great_murdock[100130]:             "vg_name": "ceph_vg2"
Dec 04 10:18:03 compute-0 great_murdock[100130]:         }
Dec 04 10:18:03 compute-0 great_murdock[100130]:     ]
Dec 04 10:18:03 compute-0 great_murdock[100130]: }
Dec 04 10:18:03 compute-0 systemd[1]: libpod-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope: Deactivated successfully.
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.502856691 +0000 UTC m=+0.515908235 container died 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 10:18:03 compute-0 sudo[100229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdnrngivdtfetezmrhpzwdnkmkedysfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843483.048099-32-145266360230481/AnsiballZ_command.py'
Dec 04 10:18:03 compute-0 sudo[100229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721-merged.mount: Deactivated successfully.
Dec 04 10:18:03 compute-0 podman[100078]: 2025-12-04 10:18:03.55513935 +0000 UTC m=+0.568190904 container remove 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:18:03 compute-0 systemd[1]: libpod-conmon-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope: Deactivated successfully.
Dec 04 10:18:03 compute-0 sudo[99930]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:03 compute-0 sudo[100243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:18:03 compute-0 sudo[100243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:03 compute-0 sudo[100243]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 04 10:18:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 04 10:18:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 04 10:18:03 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474749565s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=53'483 active pruub 169.307846069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474720001s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=53'483 unknown NOTIFY pruub 169.307846069s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=15.456710815s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=63'485 active pruub 176.290176392s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=15.456697464s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=63'485 unknown NOTIFY pruub 176.290176392s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474306107s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'485 active pruub 169.308151245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474275589s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'487 active pruub 169.308166504s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474267006s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'485 unknown NOTIFY pruub 169.308151245s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474235535s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'487 unknown NOTIFY pruub 169.308166504s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:03 compute-0 ceph-mon[75358]: 6.8 scrub starts
Dec 04 10:18:03 compute-0 ceph-mon[75358]: 6.8 scrub ok
Dec 04 10:18:03 compute-0 ceph-mon[75358]: osdmap e71: 3 total, 3 up, 3 in
Dec 04 10:18:03 compute-0 ceph-mon[75358]: pgmap v170: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 04 10:18:03 compute-0 python3.9[100233]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:18:03 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:03 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:03 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:03 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=63'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=63'489 lcod 63'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:03 compute-0 sudo[100268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:18:03 compute-0 sudo[100268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.065606773 +0000 UTC m=+0.060329616 container create d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:04 compute-0 systemd[1]: Started libpod-conmon-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope.
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.037677425 +0000 UTC m=+0.032400328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.163901529 +0000 UTC m=+0.158624362 container init d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.171932874 +0000 UTC m=+0.166655687 container start d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.175772037 +0000 UTC m=+0.170494850 container attach d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:18:04 compute-0 inspiring_saha[100328]: 167 167
Dec 04 10:18:04 compute-0 systemd[1]: libpod-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope: Deactivated successfully.
Dec 04 10:18:04 compute-0 conmon[100328]: conmon d8fda11346c3042b09c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope/container/memory.events
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.180630615 +0000 UTC m=+0.175353438 container died d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ca272c40489e61977fde74399e58b7bc65ba674e765739c19bb169495aacad-merged.mount: Deactivated successfully.
Dec 04 10:18:04 compute-0 podman[100312]: 2025-12-04 10:18:04.227973714 +0000 UTC m=+0.222696527 container remove d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:18:04 compute-0 systemd[1]: libpod-conmon-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope: Deactivated successfully.
Dec 04 10:18:04 compute-0 podman[100355]: 2025-12-04 10:18:04.412235048 +0000 UTC m=+0.045502166 container create 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:18:04 compute-0 systemd[1]: Started libpod-conmon-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope.
Dec 04 10:18:04 compute-0 podman[100355]: 2025-12-04 10:18:04.393451362 +0000 UTC m=+0.026718500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:18:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:18:04 compute-0 podman[100355]: 2025-12-04 10:18:04.525497017 +0000 UTC m=+0.158764155 container init 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:04 compute-0 podman[100355]: 2025-12-04 10:18:04.538170304 +0000 UTC m=+0.171437422 container start 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:18:04 compute-0 podman[100355]: 2025-12-04 10:18:04.54621135 +0000 UTC m=+0.179478488 container attach 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:18:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 04 10:18:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 04 10:18:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 04 10:18:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 04 10:18:04 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=63'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.129341125s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 active pruub 170.542831421s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.129242897s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 170.542831421s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131555557s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=63'489 lcod 63'488 active pruub 170.545547485s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131464005s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=63'489 lcod 63'488 unknown NOTIFY pruub 170.545547485s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131343842s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=63'485 lcod 60'484 active pruub 170.545532227s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131286621s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=63'485 lcod 60'484 unknown NOTIFY pruub 170.545532227s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.130816460s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 active pruub 170.545532227s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.130729675s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 170.545532227s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:04 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 04 10:18:04 compute-0 ceph-mon[75358]: osdmap e72: 3 total, 3 up, 3 in
Dec 04 10:18:04 compute-0 ceph-mon[75358]: osdmap e73: 3 total, 3 up, 3 in
Dec 04 10:18:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 04 10:18:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 04 10:18:05 compute-0 lvm[100451]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:18:05 compute-0 lvm[100451]: VG ceph_vg1 finished
Dec 04 10:18:05 compute-0 lvm[100450]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:18:05 compute-0 lvm[100450]: VG ceph_vg0 finished
Dec 04 10:18:05 compute-0 lvm[100453]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:18:05 compute-0 lvm[100453]: VG ceph_vg2 finished
Dec 04 10:18:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 04 10:18:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 04 10:18:05 compute-0 admiring_goldstine[100372]: {}
Dec 04 10:18:05 compute-0 systemd[1]: libpod-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Deactivated successfully.
Dec 04 10:18:05 compute-0 systemd[1]: libpod-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Consumed 1.482s CPU time.
Dec 04 10:18:05 compute-0 podman[100355]: 2025-12-04 10:18:05.476409292 +0000 UTC m=+1.109676410 container died 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 04 10:18:05 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 04 10:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39-merged.mount: Deactivated successfully.
Dec 04 10:18:05 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 04 10:18:05 compute-0 podman[100355]: 2025-12-04 10:18:05.529753697 +0000 UTC m=+1.163020815 container remove 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:18:05 compute-0 systemd[1]: libpod-conmon-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Deactivated successfully.
Dec 04 10:18:05 compute-0 sudo[100268]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 04 10:18:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:18:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 04 10:18:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 04 10:18:05 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 04 10:18:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:05 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:18:05 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=73/74 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=73/74 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:05 compute-0 sudo[100470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:18:05 compute-0 sudo[100470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:18:05 compute-0 sudo[100470]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:05 compute-0 ceph-mon[75358]: 5.1e scrub starts
Dec 04 10:18:05 compute-0 ceph-mon[75358]: 5.1e scrub ok
Dec 04 10:18:05 compute-0 ceph-mon[75358]: pgmap v173: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 04 10:18:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 04 10:18:05 compute-0 ceph-mon[75358]: osdmap e74: 3 total, 3 up, 3 in
Dec 04 10:18:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:18:05 compute-0 sshd-session[100458]: Invalid user customer from 74.249.218.27 port 44068
Dec 04 10:18:05 compute-0 sshd-session[100458]: Received disconnect from 74.249.218.27 port 44068:11: Bye Bye [preauth]
Dec 04 10:18:05 compute-0 sshd-session[100458]: Disconnected from invalid user customer 74.249.218.27 port 44068 [preauth]
Dec 04 10:18:06 compute-0 sshd-session[100495]: Invalid user syncthing from 107.175.213.239 port 50268
Dec 04 10:18:06 compute-0 sshd-session[100495]: Received disconnect from 107.175.213.239 port 50268:11: Bye Bye [preauth]
Dec 04 10:18:06 compute-0 sshd-session[100495]: Disconnected from invalid user syncthing 107.175.213.239 port 50268 [preauth]
Dec 04 10:18:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 04 10:18:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 04 10:18:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.337594986s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 active pruub 168.773162842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.337542534s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 168.773162842s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.336889267s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=63'486 lcod 63'486 active pruub 168.773162842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.336738586s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 168.773162842s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 04 10:18:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978973389s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=63'487 active pruub 178.742630005s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978861809s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=63'487 unknown NOTIFY pruub 178.742630005s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978198051s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=53'483 active pruub 178.742858887s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.977583885s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=63'485 active pruub 178.742523193s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.970883369s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=63'485 active pruub 178.735977173s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.977322578s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=63'485 unknown NOTIFY pruub 178.742523193s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.970640182s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=63'485 unknown NOTIFY pruub 178.735977173s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.977385521s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=53'483 unknown NOTIFY pruub 178.742858887s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec 04 10:18:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 04 10:18:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 04 10:18:06 compute-0 ceph-mon[75358]: 4.a scrub starts
Dec 04 10:18:06 compute-0 ceph-mon[75358]: 4.a scrub ok
Dec 04 10:18:06 compute-0 ceph-mon[75358]: 2.18 scrub starts
Dec 04 10:18:06 compute-0 ceph-mon[75358]: 2.18 scrub ok
Dec 04 10:18:06 compute-0 ceph-mon[75358]: osdmap e75: 3 total, 3 up, 3 in
Dec 04 10:18:07 compute-0 sshd-session[100496]: Invalid user dmdba from 103.179.218.243 port 41378
Dec 04 10:18:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 04 10:18:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 04 10:18:07 compute-0 sshd-session[100496]: Received disconnect from 103.179.218.243 port 41378:11: Bye Bye [preauth]
Dec 04 10:18:07 compute-0 sshd-session[100496]: Disconnected from invalid user dmdba 103.179.218.243 port 41378 [preauth]
Dec 04 10:18:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 04 10:18:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 04 10:18:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 04 10:18:07 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 04 10:18:07 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=75/76 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=75/76 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[57,75)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=75/76 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:07 compute-0 ceph-mon[75358]: 3.e scrub starts
Dec 04 10:18:07 compute-0 ceph-mon[75358]: 3.e scrub ok
Dec 04 10:18:07 compute-0 ceph-mon[75358]: 2.7 scrub starts
Dec 04 10:18:07 compute-0 ceph-mon[75358]: 2.7 scrub ok
Dec 04 10:18:07 compute-0 ceph-mon[75358]: pgmap v176: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec 04 10:18:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 04 10:18:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 04 10:18:07 compute-0 ceph-mon[75358]: osdmap e76: 3 total, 3 up, 3 in
Dec 04 10:18:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 04 10:18:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 04 10:18:08 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993161201s) [2] async=[2] r=-1 lpr=77 pi=[57,77)/1 crt=63'487 lcod 63'486 active pruub 174.463943481s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:08 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993092537s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 174.463943481s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:08 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993423462s) [2] async=[2] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 active pruub 174.464141846s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:08 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993075371s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.464141846s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:08 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 04 10:18:08 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:08 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:08 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:08 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec 04 10:18:08 compute-0 ceph-mon[75358]: 4.1 scrub starts
Dec 04 10:18:08 compute-0 ceph-mon[75358]: 4.1 scrub ok
Dec 04 10:18:08 compute-0 ceph-mon[75358]: osdmap e77: 3 total, 3 up, 3 in
Dec 04 10:18:09 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 04 10:18:09 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 04 10:18:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 04 10:18:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 04 10:18:09 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 04 10:18:09 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 78 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=77/78 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:09 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 78 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:09 compute-0 ceph-mon[75358]: pgmap v179: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec 04 10:18:09 compute-0 ceph-mon[75358]: osdmap e78: 3 total, 3 up, 3 in
Dec 04 10:18:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 04 10:18:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 04 10:18:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 04 10:18:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 04 10:18:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:10 compute-0 ceph-mon[75358]: 5.7 scrub starts
Dec 04 10:18:10 compute-0 ceph-mon[75358]: 5.7 scrub ok
Dec 04 10:18:11 compute-0 sudo[100229]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:11 compute-0 sshd-session[99644]: Connection closed by 192.168.122.30 port 59188
Dec 04 10:18:11 compute-0 sshd-session[99630]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:18:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 04 10:18:11 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 04 10:18:11 compute-0 systemd[1]: session-34.scope: Consumed 8.455s CPU time.
Dec 04 10:18:11 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Dec 04 10:18:11 compute-0 systemd-logind[798]: Removed session 34.
Dec 04 10:18:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 04 10:18:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 6 objects/s recovering
Dec 04 10:18:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 04 10:18:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 04 10:18:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 04 10:18:12 compute-0 ceph-mon[75358]: 6.11 scrub starts
Dec 04 10:18:12 compute-0 ceph-mon[75358]: 6.11 scrub ok
Dec 04 10:18:12 compute-0 ceph-mon[75358]: 2.5 scrub starts
Dec 04 10:18:12 compute-0 ceph-mon[75358]: 2.5 scrub ok
Dec 04 10:18:12 compute-0 ceph-mon[75358]: pgmap v181: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 04 10:18:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 04 10:18:12 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 04 10:18:13 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 04 10:18:13 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 04 10:18:13 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 04 10:18:13 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 04 10:18:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 04 10:18:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 04 10:18:13 compute-0 ceph-mon[75358]: 7.1 scrub starts
Dec 04 10:18:13 compute-0 ceph-mon[75358]: 7.1 scrub ok
Dec 04 10:18:13 compute-0 ceph-mon[75358]: pgmap v182: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 6 objects/s recovering
Dec 04 10:18:13 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 04 10:18:13 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 04 10:18:13 compute-0 ceph-mon[75358]: osdmap e79: 3 total, 3 up, 3 in
Dec 04 10:18:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 6 objects/s recovering
Dec 04 10:18:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 04 10:18:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 04 10:18:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 04 10:18:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 04 10:18:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 04 10:18:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 3.7 scrub starts
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 3.7 scrub ok
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 2.1d scrub starts
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 2.1d scrub ok
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 2.3 scrub starts
Dec 04 10:18:14 compute-0 ceph-mon[75358]: 2.3 scrub ok
Dec 04 10:18:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 04 10:18:15 compute-0 ceph-mon[75358]: pgmap v184: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 6 objects/s recovering
Dec 04 10:18:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 04 10:18:15 compute-0 ceph-mon[75358]: osdmap e80: 3 total, 3 up, 3 in
Dec 04 10:18:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 04 10:18:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 04 10:18:16 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 04 10:18:16 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 04 10:18:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 238 B/s, 5 objects/s recovering
Dec 04 10:18:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 04 10:18:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 04 10:18:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 04 10:18:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 04 10:18:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 04 10:18:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 04 10:18:16 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 04 10:18:17 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 04 10:18:17 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 04 10:18:17 compute-0 ceph-mon[75358]: 4.e scrub starts
Dec 04 10:18:17 compute-0 ceph-mon[75358]: 4.e scrub ok
Dec 04 10:18:17 compute-0 ceph-mon[75358]: 5.4 scrub starts
Dec 04 10:18:17 compute-0 ceph-mon[75358]: 5.4 scrub ok
Dec 04 10:18:17 compute-0 ceph-mon[75358]: pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 238 B/s, 5 objects/s recovering
Dec 04 10:18:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 04 10:18:17 compute-0 ceph-mon[75358]: osdmap e81: 3 total, 3 up, 3 in
Dec 04 10:18:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 04 10:18:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 04 10:18:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 04 10:18:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 04 10:18:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 04 10:18:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 04 10:18:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 04 10:18:18 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 04 10:18:18 compute-0 ceph-mon[75358]: 2.1c scrub starts
Dec 04 10:18:18 compute-0 ceph-mon[75358]: 2.1c scrub ok
Dec 04 10:18:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 04 10:18:19 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 04 10:18:19 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 04 10:18:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316296577s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=53'483 lcod 0'0 active pruub 184.772720337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.315694809s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 184.772720337s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316226959s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=63'486 lcod 63'486 active pruub 184.773544312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316187859s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 184.773544312s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 04 10:18:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 04 10:18:19 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:19 compute-0 ceph-mon[75358]: 2.a scrub starts
Dec 04 10:18:19 compute-0 ceph-mon[75358]: 2.a scrub ok
Dec 04 10:18:19 compute-0 ceph-mon[75358]: pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 04 10:18:19 compute-0 ceph-mon[75358]: osdmap e82: 3 total, 3 up, 3 in
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:19 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 04 10:18:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 04 10:18:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 04 10:18:20 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 04 10:18:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 04 10:18:20 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 04 10:18:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 04 10:18:20 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 04 10:18:20 compute-0 ceph-mon[75358]: 2.19 scrub starts
Dec 04 10:18:20 compute-0 ceph-mon[75358]: 2.19 scrub ok
Dec 04 10:18:20 compute-0 ceph-mon[75358]: osdmap e83: 3 total, 3 up, 3 in
Dec 04 10:18:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 04 10:18:21 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:21 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 04 10:18:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 04 10:18:21 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 04 10:18:21 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:21 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:21 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:21 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.411987305s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 188.133209229s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.409049034s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=63'487 lcod 63'486 active pruub 188.130508423s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.411731720s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 188.133209229s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:21 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.408908844s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 188.130508423s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:21 compute-0 ceph-mon[75358]: 2.d scrub starts
Dec 04 10:18:21 compute-0 ceph-mon[75358]: 2.d scrub ok
Dec 04 10:18:21 compute-0 ceph-mon[75358]: pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:21 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 04 10:18:21 compute-0 ceph-mon[75358]: osdmap e84: 3 total, 3 up, 3 in
Dec 04 10:18:21 compute-0 ceph-mon[75358]: 3.5 scrub starts
Dec 04 10:18:21 compute-0 ceph-mon[75358]: 3.5 scrub ok
Dec 04 10:18:22 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 04 10:18:22 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 04 10:18:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 04 10:18:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 04 10:18:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 04 10:18:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 04 10:18:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 04 10:18:22 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 04 10:18:22 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 86 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:22 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 86 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=85/86 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:22 compute-0 ceph-mon[75358]: osdmap e85: 3 total, 3 up, 3 in
Dec 04 10:18:22 compute-0 ceph-mon[75358]: 7.c scrub starts
Dec 04 10:18:22 compute-0 ceph-mon[75358]: 7.c scrub ok
Dec 04 10:18:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 04 10:18:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 04 10:18:22 compute-0 ceph-mon[75358]: osdmap e86: 3 total, 3 up, 3 in
Dec 04 10:18:23 compute-0 ceph-mon[75358]: pgmap v194: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 04 10:18:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 04 10:18:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 04 10:18:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 04 10:18:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 04 10:18:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 04 10:18:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 04 10:18:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 04 10:18:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 04 10:18:24 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 04 10:18:24 compute-0 ceph-mon[75358]: 3.8 scrub starts
Dec 04 10:18:24 compute-0 ceph-mon[75358]: 3.8 scrub ok
Dec 04 10:18:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 04 10:18:25 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 04 10:18:25 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 04 10:18:25 compute-0 ceph-mon[75358]: 5.9 scrub starts
Dec 04 10:18:25 compute-0 ceph-mon[75358]: 5.9 scrub ok
Dec 04 10:18:25 compute-0 ceph-mon[75358]: pgmap v196: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 04 10:18:25 compute-0 ceph-mon[75358]: osdmap e87: 3 total, 3 up, 3 in
Dec 04 10:18:25 compute-0 ceph-mon[75358]: 7.2 scrub starts
Dec 04 10:18:25 compute-0 ceph-mon[75358]: 7.2 scrub ok
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:18:26
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:18:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 2 objects/s recovering
Dec 04 10:18:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 04 10:18:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 04 10:18:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 04 10:18:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 04 10:18:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 04 10:18:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 04 10:18:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 04 10:18:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 04 10:18:26 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 04 10:18:27 compute-0 sshd-session[100543]: Accepted publickey for zuul from 192.168.122.30 port 45756 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:18:27 compute-0 systemd-logind[798]: New session 35 of user zuul.
Dec 04 10:18:27 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 04 10:18:27 compute-0 sshd-session[100543]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:18:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 04 10:18:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:18:27 compute-0 python3.9[100696]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:18:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:18:27 compute-0 ceph-mon[75358]: pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 2 objects/s recovering
Dec 04 10:18:27 compute-0 ceph-mon[75358]: 5.16 scrub starts
Dec 04 10:18:27 compute-0 ceph-mon[75358]: 5.16 scrub ok
Dec 04 10:18:27 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 04 10:18:27 compute-0 ceph-mon[75358]: osdmap e88: 3 total, 3 up, 3 in
Dec 04 10:18:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 04 10:18:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 04 10:18:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 04 10:18:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 04 10:18:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 04 10:18:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 04 10:18:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 04 10:18:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 04 10:18:28 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 04 10:18:28 compute-0 ceph-mon[75358]: 2.17 scrub starts
Dec 04 10:18:28 compute-0 ceph-mon[75358]: 2.17 scrub ok
Dec 04 10:18:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 04 10:18:29 compute-0 python3.9[100870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:18:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 04 10:18:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 04 10:18:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:29 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 04 10:18:29 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 04 10:18:30 compute-0 ceph-mon[75358]: pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 04 10:18:30 compute-0 ceph-mon[75358]: 5.11 scrub starts
Dec 04 10:18:30 compute-0 ceph-mon[75358]: 5.11 scrub ok
Dec 04 10:18:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 04 10:18:30 compute-0 ceph-mon[75358]: osdmap e89: 3 total, 3 up, 3 in
Dec 04 10:18:30 compute-0 ceph-mon[75358]: 7.e scrub starts
Dec 04 10:18:30 compute-0 ceph-mon[75358]: 7.e scrub ok
Dec 04 10:18:30 compute-0 sudo[101024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstppojrabnigptekwrmdlvggbinuwkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843509.5396125-45-162877158345978/AnsiballZ_command.py'
Dec 04 10:18:30 compute-0 sudo[101024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:30 compute-0 python3.9[101026]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:18:30 compute-0 sudo[101024]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 04 10:18:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 04 10:18:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 04 10:18:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 04 10:18:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 04 10:18:30 compute-0 sudo[101177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipwmhaxaefetafwzhrqzfozizhkcgitw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843510.4564333-57-123370151046048/AnsiballZ_stat.py'
Dec 04 10:18:30 compute-0 sudo[101177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 04 10:18:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 04 10:18:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 04 10:18:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 04 10:18:31 compute-0 ceph-mon[75358]: 2.1b scrub starts
Dec 04 10:18:31 compute-0 ceph-mon[75358]: 2.1b scrub ok
Dec 04 10:18:31 compute-0 ceph-mon[75358]: 2.f scrub starts
Dec 04 10:18:31 compute-0 ceph-mon[75358]: 2.f scrub ok
Dec 04 10:18:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 04 10:18:31 compute-0 python3.9[101179]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:18:31 compute-0 sudo[101177]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:31 compute-0 sudo[101331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqualxdywtnvjcdcbhuqtdwwewzwihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843511.3132284-68-211862936246639/AnsiballZ_file.py'
Dec 04 10:18:31 compute-0 sudo[101331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:31 compute-0 python3.9[101333]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:18:31 compute-0 sudo[101331]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:32 compute-0 ceph-mon[75358]: pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 04 10:18:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 04 10:18:32 compute-0 ceph-mon[75358]: osdmap e90: 3 total, 3 up, 3 in
Dec 04 10:18:32 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 04 10:18:32 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 04 10:18:32 compute-0 sudo[101485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkesouzinojjctorheaywpncidqxorgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843512.153429-77-240964276326459/AnsiballZ_file.py'
Dec 04 10:18:32 compute-0 sudo[101485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:32 compute-0 python3.9[101487]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:18:32 compute-0 sudo[101485]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 04 10:18:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 04 10:18:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 04 10:18:33 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 04 10:18:33 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 04 10:18:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 04 10:18:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 04 10:18:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 90 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90 pruub=9.780766487s) [2] r=-1 lpr=90 pi=[64,90)/1 crt=63'485 active pruub 200.290924072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:33 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 90 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90 pruub=9.780334473s) [2] r=-1 lpr=90 pi=[64,90)/1 crt=63'485 unknown NOTIFY pruub 200.290924072s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:33 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90) [2] r=0 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:33 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 04 10:18:33 compute-0 ceph-mon[75358]: 6.f scrub starts
Dec 04 10:18:33 compute-0 ceph-mon[75358]: 6.f scrub ok
Dec 04 10:18:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 04 10:18:33 compute-0 python3.9[101637]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:18:33 compute-0 network[101654]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:18:33 compute-0 network[101655]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:18:33 compute-0 network[101656]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:18:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 04 10:18:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 04 10:18:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 04 10:18:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 92 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:34 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 92 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:34 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:34 compute-0 ceph-mon[75358]: pgmap v204: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:34 compute-0 ceph-mon[75358]: 4.1a scrub starts
Dec 04 10:18:34 compute-0 ceph-mon[75358]: 4.1a scrub ok
Dec 04 10:18:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 04 10:18:34 compute-0 ceph-mon[75358]: osdmap e91: 3 total, 3 up, 3 in
Dec 04 10:18:34 compute-0 ceph-mon[75358]: osdmap e92: 3 total, 3 up, 3 in
Dec 04 10:18:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 04 10:18:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 04 10:18:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 04 10:18:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 04 10:18:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 04 10:18:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 04 10:18:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 04 10:18:35 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 04 10:18:35 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=8.783335686s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'483 active pruub 201.308776855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:35 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=8.782843590s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'483 unknown NOTIFY pruub 201.308776855s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:35 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93) [1] r=0 lpr=93 pi=[65,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:35 compute-0 ceph-mon[75358]: pgmap v207: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 04 10:18:35 compute-0 ceph-mon[75358]: 6.1c scrub starts
Dec 04 10:18:35 compute-0 ceph-mon[75358]: 6.1c scrub ok
Dec 04 10:18:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 04 10:18:35 compute-0 ceph-mon[75358]: osdmap e93: 3 total, 3 up, 3 in
Dec 04 10:18:35 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 04 10:18:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 04 10:18:36 compute-0 sshd-session[101695]: Invalid user int from 103.149.86.230 port 43720
Dec 04 10:18:36 compute-0 sshd-session[101695]: Received disconnect from 103.149.86.230 port 43720:11: Bye Bye [preauth]
Dec 04 10:18:36 compute-0 sshd-session[101695]: Disconnected from invalid user int 103.149.86.230 port 43720 [preauth]
Dec 04 10:18:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 04 10:18:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 04 10:18:36 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:36 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 04 10:18:36 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.028572083s) [2] async=[2] r=-1 lpr=94 pi=[64,94)/1 crt=63'485 active pruub 208.567962646s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:36 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.028485298s) [2] r=-1 lpr=94 pi=[64,94)/1 crt=63'485 unknown NOTIFY pruub 208.567962646s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:36 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:36 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:36 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:36 compute-0 ceph-mon[75358]: 4.10 scrub starts
Dec 04 10:18:36 compute-0 ceph-mon[75358]: 4.10 scrub ok
Dec 04 10:18:36 compute-0 ceph-mon[75358]: osdmap e94: 3 total, 3 up, 3 in
Dec 04 10:18:36 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 04 10:18:36 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 04 10:18:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 04 10:18:36 compute-0 python3.9[101918]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0771607641612692e-06 of space, bias 4.0, pg target 0.001292592916993523 quantized to 16 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.324168131796575e-06 of space, bias 1.0, pg target 0.0012972504395389725 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:18:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:18:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 04 10:18:37 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 04 10:18:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 04 10:18:37 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 04 10:18:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95 pruub=8.048609734s) [0] r=-1 lpr=95 pi=[73,95)/1 crt=53'483 active pruub 187.843475342s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95 pruub=8.048556328s) [0] r=-1 lpr=95 pi=[73,95)/1 crt=53'483 unknown NOTIFY pruub 187.843475342s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:37 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95) [0] r=0 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:37 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=94/95 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:37 compute-0 ceph-mon[75358]: 5.5 scrub starts
Dec 04 10:18:37 compute-0 ceph-mon[75358]: 5.5 scrub ok
Dec 04 10:18:37 compute-0 ceph-mon[75358]: pgmap v210: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 04 10:18:37 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 95 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:37 compute-0 python3.9[102068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:18:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 04 10:18:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 04 10:18:38 compute-0 sshd-session[101435]: Connection closed by 101.47.163.20 port 59910 [preauth]
Dec 04 10:18:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 04 10:18:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:38 compute-0 python3.9[102222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:18:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 04 10:18:39 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 04 10:18:39 compute-0 sudo[102378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otqdhcmrcxwfcenknwlksxlumggqrmns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843519.4167106-125-38009656140262/AnsiballZ_setup.py'
Dec 04 10:18:39 compute-0 sudo[102378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:39 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.043639183s) [1] async=[1] r=-1 lpr=96 pi=[65,96)/1 crt=53'483 active pruub 210.963851929s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:39 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.043495178s) [1] r=-1 lpr=96 pi=[65,96)/1 crt=53'483 unknown NOTIFY pruub 210.963851929s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:39 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[73,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:39 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[73,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 04 10:18:39 compute-0 ceph-mon[75358]: osdmap e95: 3 total, 3 up, 3 in
Dec 04 10:18:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:39 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 96 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:39 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 96 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:40 compute-0 python3.9[102380]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:18:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 04 10:18:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 04 10:18:40 compute-0 sudo[102378]: pam_unix(sudo:session): session closed for user root
Dec 04 10:18:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 04 10:18:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 04 10:18:40 compute-0 sudo[102462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykrcrfofetzabarirvunyxosgjqgmigx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843519.4167106-125-38009656140262/AnsiballZ_dnf.py'
Dec 04 10:18:40 compute-0 sudo[102462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:18:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 04 10:18:40 compute-0 python3.9[102464]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:18:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 04 10:18:41 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 04 10:18:41 compute-0 ceph-mon[75358]: 7.1a scrub starts
Dec 04 10:18:41 compute-0 ceph-mon[75358]: 7.1a scrub ok
Dec 04 10:18:41 compute-0 ceph-mon[75358]: pgmap v212: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:41 compute-0 ceph-mon[75358]: osdmap e96: 3 total, 3 up, 3 in
Dec 04 10:18:41 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 97 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:41 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 97 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 04 10:18:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 04 10:18:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 04 10:18:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:18:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 04 10:18:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 04 10:18:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 04 10:18:43 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 4.1b scrub starts
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 4.1b scrub ok
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 2.2 scrub starts
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 2.2 scrub ok
Dec 04 10:18:43 compute-0 ceph-mon[75358]: pgmap v214: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:43 compute-0 ceph-mon[75358]: osdmap e97: 3 total, 3 up, 3 in
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 2.b scrub starts
Dec 04 10:18:43 compute-0 ceph-mon[75358]: 2.b scrub ok
Dec 04 10:18:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98 pruub=14.271065712s) [0] async=[0] r=-1 lpr=98 pi=[73,98)/1 crt=53'483 active pruub 199.639892578s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:43 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98 pruub=14.270962715s) [0] r=-1 lpr=98 pi=[73,98)/1 crt=53'483 unknown NOTIFY pruub 199.639892578s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:43 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:43 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 04 10:18:43 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 04 10:18:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 04 10:18:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 04 10:18:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 04 10:18:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 04 10:18:44 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 04 10:18:44 compute-0 ceph-mon[75358]: pgmap v216: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:18:44 compute-0 ceph-mon[75358]: 2.15 scrub starts
Dec 04 10:18:44 compute-0 ceph-mon[75358]: 2.15 scrub ok
Dec 04 10:18:44 compute-0 ceph-mon[75358]: osdmap e98: 3 total, 3 up, 3 in
Dec 04 10:18:44 compute-0 ceph-mon[75358]: 5.3 scrub starts
Dec 04 10:18:44 compute-0 ceph-mon[75358]: 5.3 scrub ok
Dec 04 10:18:44 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 99 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=98/99 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 1 objects/s recovering
Dec 04 10:18:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 04 10:18:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 04 10:18:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:45 compute-0 ceph-mon[75358]: 6.1d scrub starts
Dec 04 10:18:45 compute-0 ceph-mon[75358]: 6.1d scrub ok
Dec 04 10:18:45 compute-0 ceph-mon[75358]: osdmap e99: 3 total, 3 up, 3 in
Dec 04 10:18:45 compute-0 ceph-mon[75358]: pgmap v219: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 1 objects/s recovering
Dec 04 10:18:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 04 10:18:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 04 10:18:46 compute-0 ceph-mon[75358]: 5.12 scrub starts
Dec 04 10:18:46 compute-0 ceph-mon[75358]: 5.12 scrub ok
Dec 04 10:18:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:18:47 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 04 10:18:47 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 04 10:18:47 compute-0 ceph-mon[75358]: 4.14 scrub starts
Dec 04 10:18:47 compute-0 ceph-mon[75358]: 4.14 scrub ok
Dec 04 10:18:47 compute-0 ceph-mon[75358]: pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:18:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 04 10:18:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 04 10:18:48 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 04 10:18:48 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 04 10:18:48 compute-0 ceph-mon[75358]: 3.1d scrub starts
Dec 04 10:18:48 compute-0 ceph-mon[75358]: 3.1d scrub ok
Dec 04 10:18:48 compute-0 ceph-mon[75358]: 6.17 scrub starts
Dec 04 10:18:48 compute-0 ceph-mon[75358]: 6.17 scrub ok
Dec 04 10:18:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Dec 04 10:18:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 04 10:18:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 04 10:18:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 04 10:18:49 compute-0 ceph-mon[75358]: 5.2 scrub starts
Dec 04 10:18:49 compute-0 ceph-mon[75358]: 5.2 scrub ok
Dec 04 10:18:49 compute-0 ceph-mon[75358]: pgmap v221: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Dec 04 10:18:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 04 10:18:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 04 10:18:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 04 10:18:49 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 04 10:18:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 04 10:18:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 04 10:18:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 04 10:18:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 04 10:18:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 04 10:18:50 compute-0 ceph-mon[75358]: osdmap e100: 3 total, 3 up, 3 in
Dec 04 10:18:50 compute-0 ceph-mon[75358]: 2.8 scrub starts
Dec 04 10:18:50 compute-0 ceph-mon[75358]: 2.8 scrub ok
Dec 04 10:18:50 compute-0 ceph-mon[75358]: 4.8 scrub starts
Dec 04 10:18:50 compute-0 ceph-mon[75358]: 4.8 scrub ok
Dec 04 10:18:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Dec 04 10:18:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 04 10:18:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 04 10:18:51 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 04 10:18:51 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 04 10:18:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 04 10:18:51 compute-0 ceph-mon[75358]: pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Dec 04 10:18:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 04 10:18:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 04 10:18:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 04 10:18:51 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 04 10:18:52 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 04 10:18:52 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 04 10:18:52 compute-0 ceph-mon[75358]: 10.a scrub starts
Dec 04 10:18:52 compute-0 ceph-mon[75358]: 10.a scrub ok
Dec 04 10:18:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 04 10:18:52 compute-0 ceph-mon[75358]: osdmap e101: 3 total, 3 up, 3 in
Dec 04 10:18:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec 04 10:18:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 04 10:18:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 04 10:18:53 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 04 10:18:53 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 04 10:18:53 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 04 10:18:53 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 04 10:18:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 04 10:18:53 compute-0 ceph-mon[75358]: 10.1b scrub starts
Dec 04 10:18:53 compute-0 ceph-mon[75358]: 10.1b scrub ok
Dec 04 10:18:53 compute-0 ceph-mon[75358]: pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec 04 10:18:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 04 10:18:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 04 10:18:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 04 10:18:53 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 04 10:18:53 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 102 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102 pruub=14.467451096s) [2] r=-1 lpr=102 pi=[65,102)/1 crt=63'487 active pruub 225.304885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:53 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 102 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102 pruub=14.467391014s) [2] r=-1 lpr=102 pi=[65,102)/1 crt=63'487 unknown NOTIFY pruub 225.304885864s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:53 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102) [2] r=0 lpr=102 pi=[65,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 04 10:18:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 04 10:18:54 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 04 10:18:54 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[65,103)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:54 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[65,103)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:54 compute-0 ceph-mon[75358]: 10.1f scrub starts
Dec 04 10:18:54 compute-0 ceph-mon[75358]: 10.1f scrub ok
Dec 04 10:18:54 compute-0 ceph-mon[75358]: 2.16 scrub starts
Dec 04 10:18:54 compute-0 ceph-mon[75358]: 2.16 scrub ok
Dec 04 10:18:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 04 10:18:54 compute-0 ceph-mon[75358]: osdmap e102: 3 total, 3 up, 3 in
Dec 04 10:18:54 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:54 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 04 10:18:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 04 10:18:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 04 10:18:55 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 04 10:18:55 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 04 10:18:55 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 04 10:18:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 04 10:18:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 04 10:18:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 04 10:18:55 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 04 10:18:55 compute-0 ceph-mon[75358]: osdmap e103: 3 total, 3 up, 3 in
Dec 04 10:18:55 compute-0 ceph-mon[75358]: pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 04 10:18:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 04 10:18:56 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 104 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 04 10:18:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 04 10:18:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 04 10:18:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 04 10:18:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 04 10:18:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:56 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 04 10:18:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 04 10:18:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 04 10:18:56 compute-0 ceph-mon[75358]: 10.1c scrub starts
Dec 04 10:18:56 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105 pruub=15.487646103s) [2] async=[2] r=-1 lpr=105 pi=[65,105)/1 crt=63'487 active pruub 229.378829956s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:56 compute-0 ceph-mon[75358]: 10.1c scrub ok
Dec 04 10:18:56 compute-0 ceph-mon[75358]: 2.13 scrub starts
Dec 04 10:18:56 compute-0 ceph-mon[75358]: 2.13 scrub ok
Dec 04 10:18:56 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 04 10:18:56 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105 pruub=15.487515450s) [2] r=-1 lpr=105 pi=[65,105)/1 crt=63'487 unknown NOTIFY pruub 229.378829956s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:56 compute-0 ceph-mon[75358]: osdmap e104: 3 total, 3 up, 3 in
Dec 04 10:18:56 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:56 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:18:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 04 10:18:57 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 04 10:18:57 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 04 10:18:57 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 04 10:18:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 04 10:18:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 04 10:18:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 04 10:18:57 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 04 10:18:57 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 106 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=105/106 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:18:57 compute-0 ceph-mon[75358]: 10.1d scrub starts
Dec 04 10:18:57 compute-0 ceph-mon[75358]: 10.1d scrub ok
Dec 04 10:18:57 compute-0 ceph-mon[75358]: 2.11 scrub starts
Dec 04 10:18:57 compute-0 ceph-mon[75358]: 2.11 scrub ok
Dec 04 10:18:57 compute-0 ceph-mon[75358]: pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:57 compute-0 ceph-mon[75358]: osdmap e105: 3 total, 3 up, 3 in
Dec 04 10:18:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 04 10:18:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:18:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:18:58 compute-0 sshd-session[102543]: Invalid user radarr from 217.154.62.22 port 56800
Dec 04 10:18:58 compute-0 sshd-session[102543]: Received disconnect from 217.154.62.22 port 56800:11: Bye Bye [preauth]
Dec 04 10:18:58 compute-0 sshd-session[102543]: Disconnected from invalid user radarr 217.154.62.22 port 56800 [preauth]
Dec 04 10:18:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 04 10:18:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 04 10:18:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 04 10:18:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 04 10:18:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 04 10:18:58 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 04 10:18:58 compute-0 ceph-mon[75358]: 10.18 scrub starts
Dec 04 10:18:58 compute-0 ceph-mon[75358]: 10.18 scrub ok
Dec 04 10:18:58 compute-0 ceph-mon[75358]: 5.c scrub starts
Dec 04 10:18:58 compute-0 ceph-mon[75358]: 5.c scrub ok
Dec 04 10:18:58 compute-0 ceph-mon[75358]: osdmap e106: 3 total, 3 up, 3 in
Dec 04 10:18:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 04 10:18:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 04 10:18:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 04 10:18:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:18:59 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 107 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107 pruub=10.985615730s) [0] r=-1 lpr=107 pi=[85,107)/1 crt=63'487 active pruub 213.138580322s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:18:59 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 107 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107 pruub=10.985573769s) [0] r=-1 lpr=107 pi=[85,107)/1 crt=63'487 unknown NOTIFY pruub 213.138580322s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:18:59 compute-0 ceph-mon[75358]: pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:18:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 04 10:18:59 compute-0 ceph-mon[75358]: osdmap e107: 3 total, 3 up, 3 in
Dec 04 10:18:59 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107) [0] r=0 lpr=107 pi=[85,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 04 10:19:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 04 10:19:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 04 10:19:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 04 10:19:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 04 10:19:00 compute-0 ceph-mon[75358]: 7.1b scrub starts
Dec 04 10:19:00 compute-0 ceph-mon[75358]: 7.1b scrub ok
Dec 04 10:19:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 04 10:19:00 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[85,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:00 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[85,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:00 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 04 10:19:00 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:00 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 04 10:19:01 compute-0 ceph-mon[75358]: pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 04 10:19:01 compute-0 ceph-mon[75358]: osdmap e108: 3 total, 3 up, 3 in
Dec 04 10:19:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 04 10:19:01 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 04 10:19:01 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 109 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 04 10:19:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 04 10:19:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 1 objects/s recovering
Dec 04 10:19:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 04 10:19:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 04 10:19:02 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 04 10:19:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110 pruub=15.003255844s) [0] async=[0] r=-1 lpr=110 pi=[85,110)/1 crt=63'487 active pruub 220.203842163s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:02 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110 pruub=15.003170967s) [0] r=-1 lpr=110 pi=[85,110)/1 crt=63'487 unknown NOTIFY pruub 220.203842163s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:02 compute-0 ceph-mon[75358]: osdmap e109: 3 total, 3 up, 3 in
Dec 04 10:19:02 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:02 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:03 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 04 10:19:03 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 04 10:19:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 04 10:19:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 04 10:19:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 04 10:19:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 04 10:19:03 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 04 10:19:03 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 111 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=110/111 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:03 compute-0 ceph-mon[75358]: 5.13 scrub starts
Dec 04 10:19:03 compute-0 ceph-mon[75358]: 5.13 scrub ok
Dec 04 10:19:03 compute-0 ceph-mon[75358]: pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 1 objects/s recovering
Dec 04 10:19:04 compute-0 ceph-mon[75358]: osdmap e110: 3 total, 3 up, 3 in
Dec 04 10:19:04 compute-0 ceph-mon[75358]: 10.5 scrub starts
Dec 04 10:19:04 compute-0 ceph-mon[75358]: 10.5 scrub ok
Dec 04 10:19:04 compute-0 ceph-mon[75358]: 3.12 scrub starts
Dec 04 10:19:04 compute-0 ceph-mon[75358]: 3.12 scrub ok
Dec 04 10:19:04 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 04 10:19:04 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 04 10:19:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 04 10:19:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 04 10:19:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Dec 04 10:19:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:05 compute-0 ceph-mon[75358]: osdmap e111: 3 total, 3 up, 3 in
Dec 04 10:19:05 compute-0 ceph-mon[75358]: 10.c scrub starts
Dec 04 10:19:05 compute-0 ceph-mon[75358]: 10.c scrub ok
Dec 04 10:19:05 compute-0 ceph-mon[75358]: 3.15 scrub starts
Dec 04 10:19:05 compute-0 ceph-mon[75358]: 3.15 scrub ok
Dec 04 10:19:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 04 10:19:05 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 04 10:19:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 04 10:19:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 04 10:19:05 compute-0 sudo[102585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:19:05 compute-0 sudo[102585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:05 compute-0 sudo[102585]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:05 compute-0 sudo[102610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:19:05 compute-0 sudo[102610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: pgmap v241: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Dec 04 10:19:06 compute-0 ceph-mon[75358]: 10.0 scrub starts
Dec 04 10:19:06 compute-0 ceph-mon[75358]: 10.0 scrub ok
Dec 04 10:19:06 compute-0 sudo[102610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 04 10:19:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:19:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:19:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:19:06 compute-0 sudo[102666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:19:06 compute-0 sudo[102666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:06 compute-0 sudo[102666]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:06 compute-0 sudo[102691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:19:06 compute-0 sudo[102691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 3 objects/s recovering
Dec 04 10:19:06 compute-0 podman[102728]: 2025-12-04 10:19:06.888262219 +0000 UTC m=+0.045671587 container create e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:19:06 compute-0 systemd[1]: Started libpod-conmon-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope.
Dec 04 10:19:06 compute-0 podman[102728]: 2025-12-04 10:19:06.865712923 +0000 UTC m=+0.023122311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:07 compute-0 podman[102728]: 2025-12-04 10:19:07.00554695 +0000 UTC m=+0.162956358 container init e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:19:07 compute-0 podman[102728]: 2025-12-04 10:19:07.01423861 +0000 UTC m=+0.171647988 container start e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:19:07 compute-0 eager_lewin[102745]: 167 167
Dec 04 10:19:07 compute-0 systemd[1]: libpod-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope: Deactivated successfully.
Dec 04 10:19:07 compute-0 podman[102728]: 2025-12-04 10:19:07.02289765 +0000 UTC m=+0.180307028 container attach e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:19:07 compute-0 podman[102728]: 2025-12-04 10:19:07.023762321 +0000 UTC m=+0.181171709 container died e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:19:07 compute-0 ceph-mon[75358]: 6.e scrub starts
Dec 04 10:19:07 compute-0 ceph-mon[75358]: 6.e scrub ok
Dec 04 10:19:07 compute-0 ceph-mon[75358]: 3.17 scrub starts
Dec 04 10:19:07 compute-0 ceph-mon[75358]: 3.17 scrub ok
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:19:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a838cb98d6f455be4aaeedb599f025adf799ce942c4e352b39080a004dcd7517-merged.mount: Deactivated successfully.
Dec 04 10:19:07 compute-0 podman[102728]: 2025-12-04 10:19:07.077676886 +0000 UTC m=+0.235086264 container remove e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:19:07 compute-0 systemd[1]: libpod-conmon-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope: Deactivated successfully.
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.268262032 +0000 UTC m=+0.057944504 container create 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:19:07 compute-0 systemd[1]: Started libpod-conmon-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope.
Dec 04 10:19:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.239264929 +0000 UTC m=+0.028947481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.353853695 +0000 UTC m=+0.143536167 container init 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.361345897 +0000 UTC m=+0.151028369 container start 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.365636131 +0000 UTC m=+0.155318603 container attach 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:19:07 compute-0 intelligent_wescoff[102785]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:19:07 compute-0 intelligent_wescoff[102785]: --> All data devices are unavailable
Dec 04 10:19:07 compute-0 systemd[1]: libpod-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope: Deactivated successfully.
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.859453909 +0000 UTC m=+0.649136381 container died 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 04 10:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b-merged.mount: Deactivated successfully.
Dec 04 10:19:07 compute-0 podman[102768]: 2025-12-04 10:19:07.932231224 +0000 UTC m=+0.721913696 container remove 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:19:07 compute-0 systemd[1]: libpod-conmon-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope: Deactivated successfully.
Dec 04 10:19:07 compute-0 sudo[102691]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:08 compute-0 ceph-mon[75358]: pgmap v242: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 3 objects/s recovering
Dec 04 10:19:08 compute-0 sudo[102818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:19:08 compute-0 sudo[102818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:08 compute-0 sudo[102818]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:08 compute-0 sudo[102843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:19:08 compute-0 sudo[102843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:08 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 04 10:19:08 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.402964183 +0000 UTC m=+0.035028800 container create 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:19:08 compute-0 systemd[1]: Started libpod-conmon-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope.
Dec 04 10:19:08 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.387292682 +0000 UTC m=+0.019357319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.484777553 +0000 UTC m=+0.116842200 container init 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.495106423 +0000 UTC m=+0.127171040 container start 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.49910817 +0000 UTC m=+0.131172787 container attach 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:19:08 compute-0 gallant_swartz[102895]: 167 167
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.501721044 +0000 UTC m=+0.133785691 container died 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:19:08 compute-0 systemd[1]: libpod-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope: Deactivated successfully.
Dec 04 10:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c27fa89e14e2648e57303547ff29917aede5e9d2a385adc58d19e47e83e99dd4-merged.mount: Deactivated successfully.
Dec 04 10:19:08 compute-0 podman[102880]: 2025-12-04 10:19:08.542001099 +0000 UTC m=+0.174065716 container remove 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:19:08 compute-0 systemd[1]: libpod-conmon-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope: Deactivated successfully.
Dec 04 10:19:08 compute-0 podman[102918]: 2025-12-04 10:19:08.713082442 +0000 UTC m=+0.051660712 container create 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:19:08 compute-0 systemd[1]: Started libpod-conmon-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope.
Dec 04 10:19:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:19:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 04 10:19:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 04 10:19:08 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:08 compute-0 podman[102918]: 2025-12-04 10:19:08.69355262 +0000 UTC m=+0.032130920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:08 compute-0 podman[102918]: 2025-12-04 10:19:08.813958305 +0000 UTC m=+0.152536675 container init 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:19:08 compute-0 podman[102918]: 2025-12-04 10:19:08.829117323 +0000 UTC m=+0.167695593 container start 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 04 10:19:08 compute-0 podman[102918]: 2025-12-04 10:19:08.833015787 +0000 UTC m=+0.171594167 container attach 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:19:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 04 10:19:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 04 10:19:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 04 10:19:09 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 04 10:19:09 compute-0 ceph-mon[75358]: 10.3 scrub starts
Dec 04 10:19:09 compute-0 ceph-mon[75358]: 10.3 scrub ok
Dec 04 10:19:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 04 10:19:09 compute-0 boring_tesla[102935]: {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     "0": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "devices": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "/dev/loop3"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             ],
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_name": "ceph_lv0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_size": "21470642176",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "name": "ceph_lv0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "tags": {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_name": "ceph",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.crush_device_class": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.encrypted": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.objectstore": "bluestore",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_id": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.vdo": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.with_tpm": "0"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             },
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "vg_name": "ceph_vg0"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         }
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     ],
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     "1": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "devices": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "/dev/loop4"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             ],
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_name": "ceph_lv1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_size": "21470642176",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "name": "ceph_lv1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "tags": {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_name": "ceph",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.crush_device_class": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.encrypted": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.objectstore": "bluestore",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_id": "1",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.vdo": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.with_tpm": "0"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             },
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "vg_name": "ceph_vg1"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         }
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     ],
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     "2": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "devices": [
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "/dev/loop5"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             ],
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_name": "ceph_lv2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_size": "21470642176",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "name": "ceph_lv2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "tags": {
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.cluster_name": "ceph",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.crush_device_class": "",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.encrypted": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.objectstore": "bluestore",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osd_id": "2",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.vdo": "0",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:                 "ceph.with_tpm": "0"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             },
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "type": "block",
Dec 04 10:19:09 compute-0 boring_tesla[102935]:             "vg_name": "ceph_vg2"
Dec 04 10:19:09 compute-0 boring_tesla[102935]:         }
Dec 04 10:19:09 compute-0 boring_tesla[102935]:     ]
Dec 04 10:19:09 compute-0 boring_tesla[102935]: }
Dec 04 10:19:09 compute-0 systemd[1]: libpod-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope: Deactivated successfully.
Dec 04 10:19:09 compute-0 podman[102918]: 2025-12-04 10:19:09.149938252 +0000 UTC m=+0.488516542 container died 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1-merged.mount: Deactivated successfully.
Dec 04 10:19:09 compute-0 podman[102918]: 2025-12-04 10:19:09.200500447 +0000 UTC m=+0.539078707 container remove 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:19:09 compute-0 systemd[1]: libpod-conmon-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope: Deactivated successfully.
Dec 04 10:19:09 compute-0 sudo[102843]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:09 compute-0 sudo[102958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:19:09 compute-0 sudo[102958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:09 compute-0 sudo[102958]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:09 compute-0 sudo[102983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:19:09 compute-0 sudo[102983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.661715357 +0000 UTC m=+0.038142846 container create 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:19:09 compute-0 systemd[1]: Started libpod-conmon-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope.
Dec 04 10:19:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.738624109 +0000 UTC m=+0.115051618 container init 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.643503036 +0000 UTC m=+0.019930545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.744364228 +0000 UTC m=+0.120791717 container start 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.748481068 +0000 UTC m=+0.124908577 container attach 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:19:09 compute-0 pensive_hertz[103037]: 167 167
Dec 04 10:19:09 compute-0 systemd[1]: libpod-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope: Deactivated successfully.
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.750919586 +0000 UTC m=+0.127347075 container died 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6214015c1567bb69c76d3fda0e3dbdcd850b8eb5c98d551abbcb0262627dedd4-merged.mount: Deactivated successfully.
Dec 04 10:19:09 compute-0 podman[103020]: 2025-12-04 10:19:09.785664098 +0000 UTC m=+0.162091587 container remove 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:19:09 compute-0 systemd[1]: libpod-conmon-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope: Deactivated successfully.
Dec 04 10:19:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:09 compute-0 podman[103062]: 2025-12-04 10:19:09.928872416 +0000 UTC m=+0.040129552 container create f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:19:09 compute-0 systemd[1]: Started libpod-conmon-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope.
Dec 04 10:19:10 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:09.90923993 +0000 UTC m=+0.020497076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:10.018266711 +0000 UTC m=+0.129523897 container init f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:10.027029644 +0000 UTC m=+0.138286780 container start f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:10.043454371 +0000 UTC m=+0.154711517 container attach f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:19:10 compute-0 ceph-mon[75358]: pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec 04 10:19:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 04 10:19:10 compute-0 ceph-mon[75358]: osdmap e112: 3 total, 3 up, 3 in
Dec 04 10:19:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 04 10:19:10 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 04 10:19:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 04 10:19:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 04 10:19:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Dec 04 10:19:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 04 10:19:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:19:10 compute-0 lvm[103155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:19:10 compute-0 lvm[103155]: VG ceph_vg0 finished
Dec 04 10:19:10 compute-0 lvm[103157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:19:10 compute-0 lvm[103157]: VG ceph_vg1 finished
Dec 04 10:19:10 compute-0 lvm[103159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:19:10 compute-0 lvm[103159]: VG ceph_vg2 finished
Dec 04 10:19:10 compute-0 silly_poincare[103078]: {}
Dec 04 10:19:10 compute-0 systemd[1]: libpod-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Deactivated successfully.
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:10.926666941 +0000 UTC m=+1.037924067 container died f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:19:10 compute-0 systemd[1]: libpod-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Consumed 1.370s CPU time.
Dec 04 10:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea-merged.mount: Deactivated successfully.
Dec 04 10:19:10 compute-0 podman[103062]: 2025-12-04 10:19:10.977280917 +0000 UTC m=+1.088538043 container remove f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:19:11 compute-0 systemd[1]: libpod-conmon-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Deactivated successfully.
Dec 04 10:19:11 compute-0 sudo[102983]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:19:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:19:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 04 10:19:11 compute-0 ceph-mon[75358]: 8.15 scrub starts
Dec 04 10:19:11 compute-0 ceph-mon[75358]: 8.15 scrub ok
Dec 04 10:19:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 04 10:19:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:19:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:19:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 04 10:19:11 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 04 10:19:11 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113 pruub=8.574896812s) [1] r=-1 lpr=113 pi=[75,113)/1 crt=53'483 active pruub 221.872146606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:11 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113 pruub=8.574784279s) [1] r=-1 lpr=113 pi=[75,113)/1 crt=53'483 unknown NOTIFY pruub 221.872146606s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:11 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112 pruub=14.546545029s) [0] r=-1 lpr=112 pi=[73,112)/1 crt=63'485 active pruub 227.844512939s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:11 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112 pruub=14.546380043s) [0] r=-1 lpr=112 pi=[73,112)/1 crt=63'485 unknown NOTIFY pruub 227.844512939s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:11 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113) [1] r=0 lpr=113 pi=[75,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:11 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112) [0] r=0 lpr=113 pi=[73,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:11 compute-0 sudo[103173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:19:11 compute-0 sudo[103173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:19:11 compute-0 sudo[103173]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:12 compute-0 ceph-mon[75358]: 4.9 scrub starts
Dec 04 10:19:12 compute-0 ceph-mon[75358]: 4.9 scrub ok
Dec 04 10:19:12 compute-0 ceph-mon[75358]: pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Dec 04 10:19:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 04 10:19:12 compute-0 ceph-mon[75358]: osdmap e113: 3 total, 3 up, 3 in
Dec 04 10:19:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 04 10:19:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 04 10:19:12 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 04 10:19:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[75,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[75,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:12 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:12 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:12 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:12 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:12 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:12 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:12 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 04 10:19:12 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 04 10:19:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 04 10:19:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 04 10:19:13 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 04 10:19:13 compute-0 ceph-mon[75358]: osdmap e114: 3 total, 3 up, 3 in
Dec 04 10:19:13 compute-0 ceph-mon[75358]: 8.2 scrub starts
Dec 04 10:19:13 compute-0 ceph-mon[75358]: 8.2 scrub ok
Dec 04 10:19:13 compute-0 ceph-mon[75358]: pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:13 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 115 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] async=[1] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:13 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 115 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 04 10:19:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 04 10:19:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 04 10:19:14 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:14 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:14 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116 pruub=15.236147881s) [0] async=[0] r=-1 lpr=116 pi=[73,116)/1 crt=63'485 active pruub 231.565338135s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:14 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116 pruub=15.236060143s) [0] r=-1 lpr=116 pi=[73,116)/1 crt=63'485 unknown NOTIFY pruub 231.565338135s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:14 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116 pruub=14.986249924s) [1] async=[1] r=-1 lpr=116 pi=[75,116)/1 crt=53'483 active pruub 231.316085815s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:14 compute-0 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116 pruub=14.986185074s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=53'483 unknown NOTIFY pruub 231.316085815s@ mbc={}] state<Start>: transitioning to Stray
Dec 04 10:19:14 compute-0 ceph-mon[75358]: osdmap e115: 3 total, 3 up, 3 in
Dec 04 10:19:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 04 10:19:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 04 10:19:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 04 10:19:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 04 10:19:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 04 10:19:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 04 10:19:15 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 04 10:19:15 compute-0 ceph-osd[86021]: osd.0 pg_epoch: 117 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=116/117 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 117 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=116/117 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 04 10:19:15 compute-0 ceph-mon[75358]: osdmap e116: 3 total, 3 up, 3 in
Dec 04 10:19:15 compute-0 ceph-mon[75358]: 7.13 scrub starts
Dec 04 10:19:15 compute-0 ceph-mon[75358]: 7.13 scrub ok
Dec 04 10:19:15 compute-0 ceph-mon[75358]: pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:15 compute-0 ceph-mon[75358]: osdmap e117: 3 total, 3 up, 3 in
Dec 04 10:19:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 04 10:19:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 04 10:19:16 compute-0 ceph-mon[75358]: 11.2 scrub starts
Dec 04 10:19:16 compute-0 ceph-mon[75358]: 11.2 scrub ok
Dec 04 10:19:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:17 compute-0 ceph-mon[75358]: pgmap v253: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:18 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 04 10:19:18 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 04 10:19:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 04 10:19:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 04 10:19:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 04 10:19:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 04 10:19:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec 04 10:19:19 compute-0 sshd-session[103224]: Invalid user debian from 74.249.218.27 port 45460
Dec 04 10:19:19 compute-0 sshd-session[103224]: Received disconnect from 74.249.218.27 port 45460:11: Bye Bye [preauth]
Dec 04 10:19:19 compute-0 sshd-session[103224]: Disconnected from invalid user debian 74.249.218.27 port 45460 [preauth]
Dec 04 10:19:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 11.b scrub starts
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 11.b scrub ok
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 7.3 scrub starts
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 7.3 scrub ok
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 6.b scrub starts
Dec 04 10:19:19 compute-0 ceph-mon[75358]: 6.b scrub ok
Dec 04 10:19:19 compute-0 ceph-mon[75358]: pgmap v254: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec 04 10:19:20 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 04 10:19:20 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 04 10:19:20 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 04 10:19:20 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 04 10:19:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Dec 04 10:19:21 compute-0 ceph-mon[75358]: 8.d scrub starts
Dec 04 10:19:21 compute-0 ceph-mon[75358]: 8.d scrub ok
Dec 04 10:19:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 04 10:19:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 04 10:19:22 compute-0 ceph-mon[75358]: 3.3 scrub starts
Dec 04 10:19:22 compute-0 ceph-mon[75358]: 3.3 scrub ok
Dec 04 10:19:22 compute-0 ceph-mon[75358]: pgmap v255: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Dec 04 10:19:22 compute-0 ceph-mon[75358]: 3.1 scrub starts
Dec 04 10:19:22 compute-0 ceph-mon[75358]: 3.1 scrub ok
Dec 04 10:19:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Dec 04 10:19:24 compute-0 ceph-mon[75358]: pgmap v256: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Dec 04 10:19:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 04 10:19:24 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 04 10:19:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Dec 04 10:19:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:25 compute-0 ceph-mon[75358]: 11.1f scrub starts
Dec 04 10:19:25 compute-0 ceph-mon[75358]: 11.1f scrub ok
Dec 04 10:19:26 compute-0 ceph-mon[75358]: pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:19:26
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms']
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:19:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec 04 10:19:27 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 04 10:19:27 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:19:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:19:28 compute-0 ceph-mon[75358]: pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec 04 10:19:28 compute-0 ceph-mon[75358]: 8.1c scrub starts
Dec 04 10:19:28 compute-0 ceph-mon[75358]: 8.1c scrub ok
Dec 04 10:19:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec 04 10:19:29 compute-0 ceph-mon[75358]: pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec 04 10:19:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 04 10:19:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 04 10:19:29 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 04 10:19:29 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 04 10:19:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:30 compute-0 ceph-mon[75358]: 11.9 scrub starts
Dec 04 10:19:30 compute-0 ceph-mon[75358]: 11.9 scrub ok
Dec 04 10:19:30 compute-0 ceph-mon[75358]: 7.18 scrub starts
Dec 04 10:19:30 compute-0 ceph-mon[75358]: 7.18 scrub ok
Dec 04 10:19:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 04 10:19:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 04 10:19:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:31 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 04 10:19:31 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 04 10:19:31 compute-0 ceph-mon[75358]: pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:32 compute-0 ceph-mon[75358]: 4.5 scrub starts
Dec 04 10:19:32 compute-0 ceph-mon[75358]: 4.5 scrub ok
Dec 04 10:19:32 compute-0 ceph-mon[75358]: 11.8 scrub starts
Dec 04 10:19:32 compute-0 ceph-mon[75358]: 11.8 scrub ok
Dec 04 10:19:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:33 compute-0 ceph-mon[75358]: pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 04 10:19:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 04 10:19:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 04 10:19:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 04 10:19:35 compute-0 ceph-mon[75358]: 5.15 scrub starts
Dec 04 10:19:35 compute-0 ceph-mon[75358]: 5.15 scrub ok
Dec 04 10:19:35 compute-0 ceph-mon[75358]: pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:36 compute-0 sudo[102462]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:36 compute-0 ceph-mon[75358]: 11.1a scrub starts
Dec 04 10:19:36 compute-0 ceph-mon[75358]: 11.1a scrub ok
Dec 04 10:19:36 compute-0 sudo[103375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsntlwudlkdaankllmqqfsfbktexsxww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843576.4598784-137-171020955785455/AnsiballZ_command.py'
Dec 04 10:19:36 compute-0 sudo[103375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:19:36 compute-0 python3.9[103377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:19:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:19:37 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 04 10:19:37 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 04 10:19:37 compute-0 sudo[103375]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:37 compute-0 ceph-mon[75358]: pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 04 10:19:38 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 04 10:19:38 compute-0 sshd-session[103378]: Invalid user zjw from 103.179.218.243 port 41484
Dec 04 10:19:38 compute-0 sshd-session[103378]: Received disconnect from 103.179.218.243 port 41484:11: Bye Bye [preauth]
Dec 04 10:19:38 compute-0 sshd-session[103378]: Disconnected from invalid user zjw 103.179.218.243 port 41484 [preauth]
Dec 04 10:19:38 compute-0 sudo[103664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcijxdjjqumknpegoeplyrcpexflllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843577.8749933-145-14081286436131/AnsiballZ_selinux.py'
Dec 04 10:19:38 compute-0 sudo[103664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 04 10:19:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 04 10:19:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:38 compute-0 ceph-mon[75358]: 11.12 scrub starts
Dec 04 10:19:38 compute-0 ceph-mon[75358]: 11.12 scrub ok
Dec 04 10:19:38 compute-0 python3.9[103666]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 04 10:19:38 compute-0 sudo[103664]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:39 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 04 10:19:39 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 04 10:19:39 compute-0 sudo[103816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuibultihzqdtvrlplldflfacsqfosfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843579.2148824-156-34812755587384/AnsiballZ_command.py'
Dec 04 10:19:39 compute-0 sudo[103816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 04 10:19:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 04 10:19:39 compute-0 python3.9[103818]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 04 10:19:39 compute-0 sudo[103816]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:39 compute-0 ceph-mon[75358]: 11.3 scrub starts
Dec 04 10:19:39 compute-0 ceph-mon[75358]: 11.3 scrub ok
Dec 04 10:19:39 compute-0 ceph-mon[75358]: 4.7 scrub starts
Dec 04 10:19:39 compute-0 ceph-mon[75358]: 4.7 scrub ok
Dec 04 10:19:39 compute-0 ceph-mon[75358]: pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:40 compute-0 sudo[103968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokkfquratpfszlldryuhajxwkgfaiwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843579.8917675-164-58783251240641/AnsiballZ_file.py'
Dec 04 10:19:40 compute-0 sudo[103968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:40 compute-0 python3.9[103970]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:19:40 compute-0 sudo[103968]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 04 10:19:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 04 10:19:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:40 compute-0 ceph-mon[75358]: 8.12 scrub starts
Dec 04 10:19:40 compute-0 ceph-mon[75358]: 8.12 scrub ok
Dec 04 10:19:40 compute-0 ceph-mon[75358]: 6.1 scrub starts
Dec 04 10:19:40 compute-0 ceph-mon[75358]: 6.1 scrub ok
Dec 04 10:19:41 compute-0 sudo[104120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgczrzbbcdllmrrwisgfvdinfbpcbqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843580.5818663-172-156272397981213/AnsiballZ_mount.py'
Dec 04 10:19:41 compute-0 sudo[104120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:41 compute-0 python3.9[104122]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 04 10:19:41 compute-0 sudo[104120]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 04 10:19:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 04 10:19:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 04 10:19:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 04 10:19:41 compute-0 systemd[76741]: Created slice User Background Tasks Slice.
Dec 04 10:19:41 compute-0 systemd[76741]: Starting Cleanup of User's Temporary Files and Directories...
Dec 04 10:19:41 compute-0 systemd[76741]: Finished Cleanup of User's Temporary Files and Directories.
Dec 04 10:19:42 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 04 10:19:42 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 04 10:19:42 compute-0 ceph-mon[75358]: 3.c scrub starts
Dec 04 10:19:42 compute-0 ceph-mon[75358]: 3.c scrub ok
Dec 04 10:19:42 compute-0 ceph-mon[75358]: pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:42 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 04 10:19:42 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 04 10:19:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:42 compute-0 sudo[104273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtulmcfaoieimndzlkqvwmxhbffwgptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843582.6027367-200-25160339324211/AnsiballZ_file.py'
Dec 04 10:19:42 compute-0 sudo[104273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:43 compute-0 python3.9[104275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:19:43 compute-0 sudo[104273]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:43 compute-0 sudo[104425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknhppptuhngpgsasypdvnjunmliixhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843583.2330346-208-244079596230550/AnsiballZ_stat.py'
Dec 04 10:19:43 compute-0 sudo[104425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:43 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 04 10:19:43 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 7.4 scrub starts
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 7.4 scrub ok
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 6.4 scrub starts
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 6.4 scrub ok
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 11.15 scrub starts
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 11.15 scrub ok
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 3.1b scrub starts
Dec 04 10:19:43 compute-0 ceph-mon[75358]: 3.1b scrub ok
Dec 04 10:19:43 compute-0 ceph-mon[75358]: pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:43 compute-0 python3.9[104427]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:19:43 compute-0 sudo[104425]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:43 compute-0 sudo[104503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzncukvmkpuykubcfvilkuwctjelbmnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843583.2330346-208-244079596230550/AnsiballZ_file.py'
Dec 04 10:19:43 compute-0 sudo[104503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:44 compute-0 python3.9[104505]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:19:44 compute-0 sudo[104503]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:44 compute-0 ceph-mon[75358]: 7.1f scrub starts
Dec 04 10:19:44 compute-0 ceph-mon[75358]: 7.1f scrub ok
Dec 04 10:19:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:44 compute-0 sudo[104655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjunpumsliumdvnstqdgsrcviwnyhht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843584.571498-229-207332562644407/AnsiballZ_stat.py'
Dec 04 10:19:44 compute-0 sudo[104655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:45 compute-0 python3.9[104657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:19:45 compute-0 sudo[104655]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 04 10:19:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 04 10:19:45 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 04 10:19:45 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 04 10:19:45 compute-0 ceph-mon[75358]: pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:46 compute-0 sudo[104809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mshhjianekdzffvtzxxzizxjhbupxklt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843585.6675003-242-223290854488483/AnsiballZ_getent.py'
Dec 04 10:19:46 compute-0 sudo[104809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:46 compute-0 python3.9[104811]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 04 10:19:46 compute-0 sudo[104809]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:46 compute-0 ceph-mon[75358]: 5.1d scrub starts
Dec 04 10:19:46 compute-0 ceph-mon[75358]: 5.1d scrub ok
Dec 04 10:19:46 compute-0 ceph-mon[75358]: 7.f scrub starts
Dec 04 10:19:46 compute-0 ceph-mon[75358]: 7.f scrub ok
Dec 04 10:19:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:46 compute-0 sudo[104962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvkdydqkjvzfxyphvyafbvxcohicmvcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843586.5323284-252-95368468058408/AnsiballZ_getent.py'
Dec 04 10:19:46 compute-0 sudo[104962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:46 compute-0 python3.9[104964]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 04 10:19:47 compute-0 sudo[104962]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:47 compute-0 sudo[105115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bueolfldnrjzxbymjzaxexzcdhxxaygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843587.1616788-260-144743685010242/AnsiballZ_group.py'
Dec 04 10:19:47 compute-0 sudo[105115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:47 compute-0 ceph-mon[75358]: pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:47 compute-0 python3.9[105117]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:19:47 compute-0 sudo[105115]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:48 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 04 10:19:48 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 04 10:19:48 compute-0 sudo[105267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvyjdecffnfaqhxfpaqsclfrhhufgpus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843588.0412118-269-74287461591115/AnsiballZ_file.py'
Dec 04 10:19:48 compute-0 sudo[105267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:48 compute-0 python3.9[105269]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 04 10:19:48 compute-0 sudo[105267]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:48 compute-0 ceph-mon[75358]: 11.11 scrub starts
Dec 04 10:19:48 compute-0 ceph-mon[75358]: 11.11 scrub ok
Dec 04 10:19:49 compute-0 sudo[105419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfmezmrscyqsoxbroclhrpifoqqfihvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843588.8347814-280-198481226204182/AnsiballZ_dnf.py'
Dec 04 10:19:49 compute-0 sudo[105419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:49 compute-0 python3.9[105421]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:19:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 04 10:19:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 04 10:19:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:49 compute-0 ceph-mon[75358]: pgmap v269: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:50 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 04 10:19:50 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 04 10:19:50 compute-0 sudo[105419]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:50 compute-0 ceph-mon[75358]: 4.4 scrub starts
Dec 04 10:19:50 compute-0 ceph-mon[75358]: 4.4 scrub ok
Dec 04 10:19:50 compute-0 ceph-mon[75358]: 11.1e scrub starts
Dec 04 10:19:50 compute-0 ceph-mon[75358]: 11.1e scrub ok
Dec 04 10:19:51 compute-0 sudo[105572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhlwgatxlgixjqndrjrohqlrpcapxfbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843590.8510294-288-155768089639505/AnsiballZ_file.py'
Dec 04 10:19:51 compute-0 sudo[105572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:51 compute-0 python3.9[105574]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:19:51 compute-0 sudo[105572]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 04 10:19:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 04 10:19:51 compute-0 sudo[105724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocgadflzxiywvvtodehtngncboldukjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843591.4826553-296-76681880387007/AnsiballZ_stat.py'
Dec 04 10:19:51 compute-0 sudo[105724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:51 compute-0 python3.9[105726]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:19:51 compute-0 ceph-mon[75358]: pgmap v270: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:51 compute-0 sudo[105724]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:52 compute-0 sudo[105802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krucqllfgyzjngjybxyxtochflqexlmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843591.4826553-296-76681880387007/AnsiballZ_file.py'
Dec 04 10:19:52 compute-0 sudo[105802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:52 compute-0 python3.9[105804]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:19:52 compute-0 sudo[105802]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 04 10:19:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 04 10:19:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:52 compute-0 sudo[105954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oldzresziewjkcmxcbfeqepffixuzbid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843592.5554588-309-66163742789030/AnsiballZ_stat.py'
Dec 04 10:19:52 compute-0 sudo[105954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:52 compute-0 ceph-mon[75358]: 7.9 scrub starts
Dec 04 10:19:52 compute-0 ceph-mon[75358]: 7.9 scrub ok
Dec 04 10:19:52 compute-0 python3.9[105956]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:19:53 compute-0 sudo[105954]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:53 compute-0 sudo[106032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwroegfxaciwadbtbdklrninzkqocfeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843592.5554588-309-66163742789030/AnsiballZ_file.py'
Dec 04 10:19:53 compute-0 sudo[106032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:53 compute-0 python3.9[106034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:19:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 04 10:19:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 04 10:19:53 compute-0 sudo[106032]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:53 compute-0 ceph-mon[75358]: 5.14 scrub starts
Dec 04 10:19:53 compute-0 ceph-mon[75358]: 5.14 scrub ok
Dec 04 10:19:53 compute-0 ceph-mon[75358]: pgmap v271: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:54 compute-0 sudo[106184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbgydxffgygidkdubstjatvhvvalhgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843593.7532628-324-124269798275446/AnsiballZ_dnf.py'
Dec 04 10:19:54 compute-0 sudo[106184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:54 compute-0 python3.9[106186]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:19:54 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 04 10:19:54 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 04 10:19:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:54 compute-0 ceph-mon[75358]: 6.6 scrub starts
Dec 04 10:19:54 compute-0 ceph-mon[75358]: 6.6 scrub ok
Dec 04 10:19:54 compute-0 ceph-mon[75358]: 7.6 scrub starts
Dec 04 10:19:54 compute-0 ceph-mon[75358]: 7.6 scrub ok
Dec 04 10:19:55 compute-0 sudo[106184]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:56 compute-0 ceph-mon[75358]: pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:56 compute-0 python3.9[106339]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:19:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 04 10:19:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 04 10:19:56 compute-0 sshd-session[106188]: Invalid user work from 103.149.86.230 port 56880
Dec 04 10:19:56 compute-0 sshd-session[106188]: Received disconnect from 103.149.86.230 port 56880:11: Bye Bye [preauth]
Dec 04 10:19:56 compute-0 sshd-session[106188]: Disconnected from invalid user work 103.149.86.230 port 56880 [preauth]
Dec 04 10:19:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:57 compute-0 ceph-mon[75358]: 3.f scrub starts
Dec 04 10:19:57 compute-0 ceph-mon[75358]: 3.f scrub ok
Dec 04 10:19:57 compute-0 python3.9[106491]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 04 10:19:57 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 04 10:19:57 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 04 10:19:57 compute-0 python3.9[106641]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:19:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:19:58 compute-0 ceph-mon[75358]: pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:58 compute-0 ceph-mon[75358]: 8.14 scrub starts
Dec 04 10:19:58 compute-0 ceph-mon[75358]: 8.14 scrub ok
Dec 04 10:19:58 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 04 10:19:58 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 04 10:19:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 04 10:19:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 04 10:19:58 compute-0 sudo[106791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqltjzbcxmzepzjsimdoqobdebnuibch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843598.0396087-365-115613016360134/AnsiballZ_systemd.py'
Dec 04 10:19:58 compute-0 sudo[106791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:19:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:19:58 compute-0 python3.9[106793]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:19:58 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 04 10:19:59 compute-0 ceph-mon[75358]: 11.1c scrub starts
Dec 04 10:19:59 compute-0 ceph-mon[75358]: 11.1c scrub ok
Dec 04 10:19:59 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 04 10:19:59 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 04 10:19:59 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 04 10:19:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 04 10:19:59 compute-0 sudo[106791]: pam_unix(sudo:session): session closed for user root
Dec 04 10:19:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 04 10:19:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 04 10:19:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:19:59 compute-0 python3.9[106954]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 04 10:20:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 04 10:20:00 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 04 10:20:00 compute-0 ceph-mon[75358]: 4.2 scrub starts
Dec 04 10:20:00 compute-0 ceph-mon[75358]: 4.2 scrub ok
Dec 04 10:20:00 compute-0 ceph-mon[75358]: pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:00 compute-0 ceph-mon[75358]: 11.17 scrub starts
Dec 04 10:20:00 compute-0 ceph-mon[75358]: 11.17 scrub ok
Dec 04 10:20:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 04 10:20:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 04 10:20:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:01 compute-0 ceph-mon[75358]: 11.18 scrub starts
Dec 04 10:20:01 compute-0 ceph-mon[75358]: 11.18 scrub ok
Dec 04 10:20:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 04 10:20:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 04 10:20:01 compute-0 sudo[107104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmqzyutvbaozbycrfsfnpfgvkpiyqbtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843601.5291376-422-259802485489805/AnsiballZ_systemd.py'
Dec 04 10:20:01 compute-0 sudo[107104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:02 compute-0 python3.9[107106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:20:02 compute-0 ceph-mon[75358]: 4.12 scrub starts
Dec 04 10:20:02 compute-0 ceph-mon[75358]: 4.12 scrub ok
Dec 04 10:20:02 compute-0 ceph-mon[75358]: pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:02 compute-0 sudo[107104]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 04 10:20:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 04 10:20:02 compute-0 sudo[107258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifkdvivzhntzzfmgoqhkshwjfiairgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843602.3013163-422-222870460935720/AnsiballZ_systemd.py'
Dec 04 10:20:02 compute-0 sudo[107258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:02 compute-0 python3.9[107260]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:20:02 compute-0 sudo[107258]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:03 compute-0 ceph-mon[75358]: 6.d scrub starts
Dec 04 10:20:03 compute-0 ceph-mon[75358]: 6.d scrub ok
Dec 04 10:20:03 compute-0 ceph-mon[75358]: pgmap v276: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:03 compute-0 sshd-session[100546]: Connection closed by 192.168.122.30 port 45756
Dec 04 10:20:03 compute-0 sshd-session[100543]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:20:03 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 04 10:20:03 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.029s CPU time.
Dec 04 10:20:03 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Dec 04 10:20:03 compute-0 systemd-logind[798]: Removed session 35.
Dec 04 10:20:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 04 10:20:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 04 10:20:04 compute-0 ceph-mon[75358]: 4.f scrub starts
Dec 04 10:20:04 compute-0 ceph-mon[75358]: 4.f scrub ok
Dec 04 10:20:04 compute-0 ceph-mon[75358]: 11.14 scrub starts
Dec 04 10:20:04 compute-0 ceph-mon[75358]: 11.14 scrub ok
Dec 04 10:20:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 04 10:20:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 04 10:20:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:05 compute-0 ceph-mon[75358]: pgmap v277: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 04 10:20:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 04 10:20:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 04 10:20:06 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 04 10:20:06 compute-0 ceph-mon[75358]: 4.d scrub starts
Dec 04 10:20:06 compute-0 ceph-mon[75358]: 4.d scrub ok
Dec 04 10:20:06 compute-0 ceph-mon[75358]: 6.1e scrub starts
Dec 04 10:20:06 compute-0 ceph-mon[75358]: 6.1e scrub ok
Dec 04 10:20:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 04 10:20:07 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 04 10:20:07 compute-0 ceph-mon[75358]: 11.1b scrub starts
Dec 04 10:20:07 compute-0 ceph-mon[75358]: 11.1b scrub ok
Dec 04 10:20:07 compute-0 ceph-mon[75358]: pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:08 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 04 10:20:08 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 04 10:20:08 compute-0 ceph-mon[75358]: 8.1b scrub starts
Dec 04 10:20:08 compute-0 ceph-mon[75358]: 8.1b scrub ok
Dec 04 10:20:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:09 compute-0 sshd-session[107287]: Accepted publickey for zuul from 192.168.122.30 port 44858 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:20:09 compute-0 systemd-logind[798]: New session 36 of user zuul.
Dec 04 10:20:09 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 04 10:20:09 compute-0 sshd-session[107287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:20:09 compute-0 ceph-mon[75358]: 8.4 scrub starts
Dec 04 10:20:09 compute-0 ceph-mon[75358]: 8.4 scrub ok
Dec 04 10:20:09 compute-0 ceph-mon[75358]: pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:09 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 04 10:20:09 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 04 10:20:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:10 compute-0 python3.9[107440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:10 compute-0 ceph-mon[75358]: 6.2 scrub starts
Dec 04 10:20:10 compute-0 ceph-mon[75358]: 6.2 scrub ok
Dec 04 10:20:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:11 compute-0 sudo[107594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmpbdjlzokqxwblsgnmmraqpabykuxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843610.5961425-36-208836965068028/AnsiballZ_getent.py'
Dec 04 10:20:11 compute-0 sudo[107594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 04 10:20:11 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 04 10:20:11 compute-0 sudo[107597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:20:11 compute-0 sudo[107597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:11 compute-0 sudo[107597]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:11 compute-0 python3.9[107596]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 04 10:20:11 compute-0 sudo[107594]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:11 compute-0 sudo[107622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:20:11 compute-0 sudo[107622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 04 10:20:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 04 10:20:11 compute-0 ceph-mon[75358]: pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:11 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 04 10:20:11 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 04 10:20:11 compute-0 sudo[107622]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:20:11 compute-0 sudo[107828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siekeythrbfzjrviyaoemxowelogeyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843611.4857397-48-240687009182646/AnsiballZ_setup.py'
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:20:11 compute-0 sudo[107828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:20:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:20:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:20:11 compute-0 sudo[107831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:20:11 compute-0 sudo[107831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:11 compute-0 sudo[107831]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:11 compute-0 sudo[107856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:20:11 compute-0 sudo[107856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:12 compute-0 python3.9[107830]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.124610669 +0000 UTC m=+0.021783335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.23266565 +0000 UTC m=+0.129838296 container create 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:20:12 compute-0 systemd[1]: Started libpod-conmon-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope.
Dec 04 10:20:12 compute-0 sudo[107828]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:12 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.318428786 +0000 UTC m=+0.215601462 container init 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.326839929 +0000 UTC m=+0.224012575 container start 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:12 compute-0 sweet_borg[107918]: 167 167
Dec 04 10:20:12 compute-0 systemd[1]: libpod-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope: Deactivated successfully.
Dec 04 10:20:12 compute-0 conmon[107918]: conmon 14aea3ac1860fa37c5cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope/container/memory.events
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.345514305 +0000 UTC m=+0.242686951 container attach 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.348595563 +0000 UTC m=+0.245768209 container died 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ddc2746033e647a4b9df2101c0d2fb0c30b90c65e7a5daa0e5701d4b64152dc-merged.mount: Deactivated successfully.
Dec 04 10:20:12 compute-0 podman[107898]: 2025-12-04 10:20:12.404005288 +0000 UTC m=+0.301177934 container remove 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:20:12 compute-0 systemd[1]: libpod-conmon-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope: Deactivated successfully.
Dec 04 10:20:12 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 11.d scrub starts
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 11.d scrub ok
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 6.c scrub starts
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 6.c scrub ok
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 11.f scrub starts
Dec 04 10:20:12 compute-0 ceph-mon[75358]: 11.f scrub ok
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:20:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:20:12 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 04 10:20:12 compute-0 podman[107944]: 2025-12-04 10:20:12.555441642 +0000 UTC m=+0.040224055 container create f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:20:12 compute-0 systemd[1]: Started libpod-conmon-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope.
Dec 04 10:20:12 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:12 compute-0 podman[107944]: 2025-12-04 10:20:12.632167142 +0000 UTC m=+0.116949565 container init f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:20:12 compute-0 podman[107944]: 2025-12-04 10:20:12.5387775 +0000 UTC m=+0.023559943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:12 compute-0 podman[107944]: 2025-12-04 10:20:12.642110098 +0000 UTC m=+0.126892681 container start f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:12 compute-0 podman[107944]: 2025-12-04 10:20:12.646018933 +0000 UTC m=+0.130801366 container attach f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:20:12 compute-0 sudo[108039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukxpxnfwnouediazhyfpryfpuhcgloxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843611.4857397-48-240687009182646/AnsiballZ_dnf.py'
Dec 04 10:20:12 compute-0 sudo[108039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:12 compute-0 python3.9[108041]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 04 10:20:13 compute-0 mystifying_chatterjee[107984]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:20:13 compute-0 mystifying_chatterjee[107984]: --> All data devices are unavailable
Dec 04 10:20:13 compute-0 systemd[1]: libpod-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope: Deactivated successfully.
Dec 04 10:20:13 compute-0 podman[107944]: 2025-12-04 10:20:13.146168325 +0000 UTC m=+0.630950748 container died f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243-merged.mount: Deactivated successfully.
Dec 04 10:20:13 compute-0 podman[107944]: 2025-12-04 10:20:13.200872955 +0000 UTC m=+0.685655368 container remove f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:20:13 compute-0 systemd[1]: libpod-conmon-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope: Deactivated successfully.
Dec 04 10:20:13 compute-0 sudo[107856]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 04 10:20:13 compute-0 sudo[108070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:20:13 compute-0 sudo[108070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 04 10:20:13 compute-0 sudo[108070]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:13 compute-0 sudo[108095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:20:13 compute-0 sudo[108095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:13 compute-0 ceph-mon[75358]: 8.c scrub starts
Dec 04 10:20:13 compute-0 ceph-mon[75358]: 8.c scrub ok
Dec 04 10:20:13 compute-0 ceph-mon[75358]: pgmap v281: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.634150183 +0000 UTC m=+0.037740173 container create c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:13 compute-0 systemd[1]: Started libpod-conmon-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope.
Dec 04 10:20:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.706082387 +0000 UTC m=+0.109672397 container init c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.711800382 +0000 UTC m=+0.115390372 container start c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.618310147 +0000 UTC m=+0.021900157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.715192116 +0000 UTC m=+0.118782106 container attach c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:20:13 compute-0 dreamy_nobel[108149]: 167 167
Dec 04 10:20:13 compute-0 systemd[1]: libpod-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope: Deactivated successfully.
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.717655299 +0000 UTC m=+0.121245289 container died c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d37b212a4534b3625764d27069f54df78bacb44ff52f393f60541bb3e83ff6-merged.mount: Deactivated successfully.
Dec 04 10:20:13 compute-0 podman[108132]: 2025-12-04 10:20:13.753948488 +0000 UTC m=+0.157538478 container remove c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:13 compute-0 systemd[1]: libpod-conmon-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope: Deactivated successfully.
Dec 04 10:20:13 compute-0 podman[108173]: 2025-12-04 10:20:13.889770943 +0000 UTC m=+0.037877064 container create 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:13 compute-0 systemd[1]: Started libpod-conmon-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope.
Dec 04 10:20:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:13 compute-0 podman[108173]: 2025-12-04 10:20:13.872636471 +0000 UTC m=+0.020742612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:13 compute-0 podman[108173]: 2025-12-04 10:20:13.976151723 +0000 UTC m=+0.124257864 container init 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:20:13 compute-0 podman[108173]: 2025-12-04 10:20:13.984448434 +0000 UTC m=+0.132554565 container start 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:13 compute-0 podman[108173]: 2025-12-04 10:20:13.988273297 +0000 UTC m=+0.136379448 container attach 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]: {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     "0": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "devices": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "/dev/loop3"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             ],
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_name": "ceph_lv0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_size": "21470642176",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "name": "ceph_lv0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "tags": {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_name": "ceph",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.crush_device_class": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.encrypted": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.objectstore": "bluestore",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_id": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.vdo": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.with_tpm": "0"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             },
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "vg_name": "ceph_vg0"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         }
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     ],
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     "1": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "devices": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "/dev/loop4"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             ],
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_name": "ceph_lv1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_size": "21470642176",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "name": "ceph_lv1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "tags": {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_name": "ceph",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.crush_device_class": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.encrypted": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.objectstore": "bluestore",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_id": "1",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.vdo": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.with_tpm": "0"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             },
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "vg_name": "ceph_vg1"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         }
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     ],
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     "2": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "devices": [
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "/dev/loop5"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             ],
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_name": "ceph_lv2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_size": "21470642176",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "name": "ceph_lv2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "tags": {
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.cluster_name": "ceph",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.crush_device_class": "",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.encrypted": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.objectstore": "bluestore",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osd_id": "2",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.vdo": "0",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:                 "ceph.with_tpm": "0"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             },
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "type": "block",
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:             "vg_name": "ceph_vg2"
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:         }
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]:     ]
Dec 04 10:20:14 compute-0 pensive_lamarr[108190]: }
Dec 04 10:20:14 compute-0 systemd[1]: libpod-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope: Deactivated successfully.
Dec 04 10:20:14 compute-0 podman[108173]: 2025-12-04 10:20:14.305954469 +0000 UTC m=+0.454060600 container died 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44-merged.mount: Deactivated successfully.
Dec 04 10:20:14 compute-0 sudo[108039]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 04 10:20:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 04 10:20:14 compute-0 podman[108173]: 2025-12-04 10:20:14.353014393 +0000 UTC m=+0.501120534 container remove 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:20:14 compute-0 systemd[1]: libpod-conmon-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope: Deactivated successfully.
Dec 04 10:20:14 compute-0 sudo[108095]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:14 compute-0 ceph-mon[75358]: 2.9 scrub starts
Dec 04 10:20:14 compute-0 ceph-mon[75358]: 2.9 scrub ok
Dec 04 10:20:14 compute-0 sudo[108236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:20:14 compute-0 sudo[108236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:14 compute-0 sudo[108236]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:14 compute-0 sudo[108261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:20:14 compute-0 sudo[108261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.791296708 +0000 UTC m=+0.051612204 container create 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:20:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:14 compute-0 sudo[108436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funptpathhbkuapvvwspfplxsvrdvsrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843614.5412254-62-25239058960480/AnsiballZ_dnf.py'
Dec 04 10:20:14 compute-0 sudo[108436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:14 compute-0 systemd[1]: Started libpod-conmon-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope.
Dec 04 10:20:14 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.772229474 +0000 UTC m=+0.032544990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.867804553 +0000 UTC m=+0.128120069 container init 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.876239197 +0000 UTC m=+0.136554693 container start 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.879829644 +0000 UTC m=+0.140145140 container attach 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:20:14 compute-0 upbeat_beaver[108441]: 167 167
Dec 04 10:20:14 compute-0 systemd[1]: libpod-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope: Deactivated successfully.
Dec 04 10:20:14 compute-0 conmon[108441]: conmon 6125472c58092f14a863 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope/container/memory.events
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.884616109 +0000 UTC m=+0.144931625 container died 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe1a2f56fbfbf9ec39cecab50548d1eebb3d066a4100692ebc4906578ae04d54-merged.mount: Deactivated successfully.
Dec 04 10:20:14 compute-0 podman[108396]: 2025-12-04 10:20:14.932039341 +0000 UTC m=+0.192354847 container remove 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:14 compute-0 systemd[1]: libpod-conmon-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope: Deactivated successfully.
Dec 04 10:20:15 compute-0 python3.9[108438]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.076395102 +0000 UTC m=+0.034699257 container create 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 04 10:20:15 compute-0 systemd[1]: Started libpod-conmon-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope.
Dec 04 10:20:15 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 04 10:20:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.060429994 +0000 UTC m=+0.018734169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.164066888 +0000 UTC m=+0.122371053 container init 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.170599911 +0000 UTC m=+0.128904066 container start 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.173664947 +0000 UTC m=+0.131969122 container attach 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:20:15 compute-0 ceph-mon[75358]: 11.16 scrub starts
Dec 04 10:20:15 compute-0 ceph-mon[75358]: 11.16 scrub ok
Dec 04 10:20:15 compute-0 ceph-mon[75358]: pgmap v282: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:15 compute-0 lvm[108561]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:20:15 compute-0 lvm[108562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:20:15 compute-0 lvm[108562]: VG ceph_vg1 finished
Dec 04 10:20:15 compute-0 lvm[108561]: VG ceph_vg0 finished
Dec 04 10:20:15 compute-0 lvm[108564]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:20:15 compute-0 lvm[108564]: VG ceph_vg2 finished
Dec 04 10:20:15 compute-0 thirsty_wescoff[108483]: {}
Dec 04 10:20:15 compute-0 systemd[1]: libpod-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Deactivated successfully.
Dec 04 10:20:15 compute-0 systemd[1]: libpod-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Consumed 1.362s CPU time.
Dec 04 10:20:15 compute-0 podman[108465]: 2025-12-04 10:20:15.964458343 +0000 UTC m=+0.922762508 container died 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:20:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 04 10:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3-merged.mount: Deactivated successfully.
Dec 04 10:20:16 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 04 10:20:16 compute-0 podman[108465]: 2025-12-04 10:20:16.114570889 +0000 UTC m=+1.072875044 container remove 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:20:16 compute-0 systemd[1]: libpod-conmon-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Deactivated successfully.
Dec 04 10:20:16 compute-0 sudo[108261]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:20:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:20:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:16 compute-0 sudo[108580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:20:16 compute-0 sudo[108580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:20:16 compute-0 sudo[108580]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:16 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 04 10:20:16 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 04 10:20:16 compute-0 ceph-mon[75358]: 8.11 scrub starts
Dec 04 10:20:16 compute-0 ceph-mon[75358]: 8.11 scrub ok
Dec 04 10:20:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:20:16 compute-0 sudo[108436]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:17 compute-0 sudo[108754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcufhafscgturwbimxpqfsfjxzmalsfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843616.6228619-70-67872427647688/AnsiballZ_systemd.py'
Dec 04 10:20:17 compute-0 sudo[108754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:17 compute-0 ceph-mon[75358]: 9.8 scrub starts
Dec 04 10:20:17 compute-0 ceph-mon[75358]: 9.8 scrub ok
Dec 04 10:20:17 compute-0 ceph-mon[75358]: 11.e scrub starts
Dec 04 10:20:17 compute-0 ceph-mon[75358]: 11.e scrub ok
Dec 04 10:20:17 compute-0 ceph-mon[75358]: pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:17 compute-0 python3.9[108756]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:20:17 compute-0 sudo[108754]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 04 10:20:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 04 10:20:18 compute-0 python3.9[108910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:19 compute-0 ceph-mon[75358]: 10.1 scrub starts
Dec 04 10:20:19 compute-0 ceph-mon[75358]: 10.1 scrub ok
Dec 04 10:20:19 compute-0 sudo[109060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djehopfbttzydhbpdhnvqlqlnmoddzps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843618.6856034-88-56309090400557/AnsiballZ_sefcontext.py'
Dec 04 10:20:19 compute-0 sudo[109060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:19 compute-0 python3.9[109062]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 04 10:20:19 compute-0 sudo[109060]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:20 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 04 10:20:20 compute-0 ceph-mon[75358]: pgmap v284: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:20 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 04 10:20:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 04 10:20:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 04 10:20:20 compute-0 python3.9[109212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:20 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 04 10:20:20 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 04 10:20:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:21 compute-0 sudo[109368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdspbxsgehwmpvknpmqzbqyqygnqiwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843620.8014965-106-113148689112621/AnsiballZ_dnf.py'
Dec 04 10:20:21 compute-0 sudo[109368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:21 compute-0 ceph-mon[75358]: 9.e scrub starts
Dec 04 10:20:21 compute-0 ceph-mon[75358]: 9.e scrub ok
Dec 04 10:20:21 compute-0 ceph-mon[75358]: 8.1f scrub starts
Dec 04 10:20:21 compute-0 ceph-mon[75358]: 8.1f scrub ok
Dec 04 10:20:21 compute-0 python3.9[109370]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:20:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 04 10:20:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 04 10:20:22 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 04 10:20:22 compute-0 ceph-mon[75358]: 8.16 scrub starts
Dec 04 10:20:22 compute-0 ceph-mon[75358]: 8.16 scrub ok
Dec 04 10:20:22 compute-0 ceph-mon[75358]: pgmap v285: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:22 compute-0 ceph-mon[75358]: 8.1d scrub starts
Dec 04 10:20:22 compute-0 ceph-mon[75358]: 8.1d scrub ok
Dec 04 10:20:22 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 04 10:20:22 compute-0 sudo[109368]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:23 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 04 10:20:23 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 04 10:20:23 compute-0 ceph-mon[75358]: 9.18 scrub starts
Dec 04 10:20:23 compute-0 ceph-mon[75358]: 9.18 scrub ok
Dec 04 10:20:23 compute-0 ceph-mon[75358]: pgmap v286: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:23 compute-0 sudo[109521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-minuknlznnqzkllylqawqcolqydkxdgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843622.8201065-114-142868611618009/AnsiballZ_command.py'
Dec 04 10:20:23 compute-0 sudo[109521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:23 compute-0 python3.9[109523]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:20:23 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 04 10:20:23 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 04 10:20:24 compute-0 ceph-mon[75358]: 9.13 scrub starts
Dec 04 10:20:24 compute-0 ceph-mon[75358]: 9.13 scrub ok
Dec 04 10:20:24 compute-0 ceph-mon[75358]: 8.1a scrub starts
Dec 04 10:20:24 compute-0 ceph-mon[75358]: 8.1a scrub ok
Dec 04 10:20:24 compute-0 sudo[109521]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 04 10:20:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 04 10:20:24 compute-0 sudo[109808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vywenxywqqkvimwlsdzinzyqltlrmwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843624.3020592-122-124924757415680/AnsiballZ_file.py'
Dec 04 10:20:24 compute-0 sudo[109808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:24 compute-0 python3.9[109810]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 04 10:20:24 compute-0 sudo[109808]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:25 compute-0 ceph-mon[75358]: pgmap v287: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:25 compute-0 python3.9[109960]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:20:26 compute-0 sudo[110112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexgyiywjezcloalmzjscrdrvbmdjlqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843625.8678627-138-173336845638704/AnsiballZ_dnf.py'
Dec 04 10:20:26 compute-0 sudo[110112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:26 compute-0 ceph-mon[75358]: 8.17 scrub starts
Dec 04 10:20:26 compute-0 ceph-mon[75358]: 8.17 scrub ok
Dec 04 10:20:26 compute-0 python3.9[110114]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:20:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 04 10:20:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:20:26
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:20:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:26 compute-0 sshd-session[110116]: Invalid user supermaint from 217.154.62.22 port 47926
Dec 04 10:20:27 compute-0 sshd-session[110116]: Received disconnect from 217.154.62.22 port 47926:11: Bye Bye [preauth]
Dec 04 10:20:27 compute-0 sshd-session[110116]: Disconnected from invalid user supermaint 217.154.62.22 port 47926 [preauth]
Dec 04 10:20:27 compute-0 ceph-mon[75358]: pgmap v288: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:27 compute-0 sudo[110112]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:20:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:20:28 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 04 10:20:28 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 04 10:20:28 compute-0 ceph-mon[75358]: 11.13 scrub starts
Dec 04 10:20:28 compute-0 ceph-mon[75358]: 11.13 scrub ok
Dec 04 10:20:28 compute-0 sudo[110267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gemuffdptrhkuwkwyeugtieonlfpjmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843627.9000187-147-233568771989622/AnsiballZ_dnf.py'
Dec 04 10:20:28 compute-0 sudo[110267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:28 compute-0 python3.9[110269]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:20:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 04 10:20:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 04 10:20:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 04 10:20:29 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 04 10:20:29 compute-0 ceph-mon[75358]: 9.19 scrub starts
Dec 04 10:20:29 compute-0 ceph-mon[75358]: 9.19 scrub ok
Dec 04 10:20:29 compute-0 ceph-mon[75358]: pgmap v289: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:29 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 04 10:20:29 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 04 10:20:29 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 04 10:20:29 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 04 10:20:29 compute-0 sudo[110267]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 8.1 scrub starts
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 8.1 scrub ok
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 9.6 scrub starts
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 9.6 scrub ok
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 10.16 scrub starts
Dec 04 10:20:30 compute-0 ceph-mon[75358]: 10.16 scrub ok
Dec 04 10:20:30 compute-0 sudo[110420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ularxevyqjdrmvdagetnxvgjvbncdnmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843630.1033576-159-3627965170979/AnsiballZ_stat.py'
Dec 04 10:20:30 compute-0 sudo[110420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 04 10:20:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 04 10:20:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 04 10:20:30 compute-0 python3.9[110422]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:20:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 04 10:20:30 compute-0 sudo[110420]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:31 compute-0 sudo[110574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmxkijxmbmzffxjtaxlzmpufylrjesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843630.709471-167-106615699232428/AnsiballZ_slurp.py'
Dec 04 10:20:31 compute-0 sudo[110574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:31 compute-0 python3.9[110576]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 04 10:20:31 compute-0 sudo[110574]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:31 compute-0 ceph-mon[75358]: 11.0 scrub starts
Dec 04 10:20:31 compute-0 ceph-mon[75358]: 11.0 scrub ok
Dec 04 10:20:31 compute-0 ceph-mon[75358]: 11.19 scrub starts
Dec 04 10:20:31 compute-0 ceph-mon[75358]: 11.19 scrub ok
Dec 04 10:20:31 compute-0 ceph-mon[75358]: pgmap v290: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:32 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 04 10:20:32 compute-0 sshd-session[107290]: Connection closed by 192.168.122.30 port 44858
Dec 04 10:20:32 compute-0 sshd-session[107287]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:20:32 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 04 10:20:32 compute-0 systemd[1]: session-36.scope: Consumed 17.989s CPU time.
Dec 04 10:20:32 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Dec 04 10:20:32 compute-0 systemd-logind[798]: Removed session 36.
Dec 04 10:20:32 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 04 10:20:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:32 compute-0 ceph-mon[75358]: 8.3 scrub starts
Dec 04 10:20:32 compute-0 ceph-mon[75358]: 8.3 scrub ok
Dec 04 10:20:33 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 04 10:20:33 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 04 10:20:33 compute-0 ceph-mon[75358]: 9.7 scrub starts
Dec 04 10:20:33 compute-0 ceph-mon[75358]: 9.7 scrub ok
Dec 04 10:20:33 compute-0 ceph-mon[75358]: pgmap v291: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 04 10:20:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 04 10:20:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:34 compute-0 ceph-mon[75358]: 8.18 scrub starts
Dec 04 10:20:34 compute-0 ceph-mon[75358]: 8.18 scrub ok
Dec 04 10:20:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 04 10:20:35 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 04 10:20:35 compute-0 sshd-session[110601]: Invalid user zimbra from 74.249.218.27 port 58920
Dec 04 10:20:35 compute-0 sshd-session[110601]: Received disconnect from 74.249.218.27 port 58920:11: Bye Bye [preauth]
Dec 04 10:20:35 compute-0 sshd-session[110601]: Disconnected from invalid user zimbra 74.249.218.27 port 58920 [preauth]
Dec 04 10:20:35 compute-0 ceph-mon[75358]: 8.10 scrub starts
Dec 04 10:20:35 compute-0 ceph-mon[75358]: 8.10 scrub ok
Dec 04 10:20:35 compute-0 ceph-mon[75358]: pgmap v292: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:35 compute-0 ceph-mon[75358]: 9.c scrub starts
Dec 04 10:20:35 compute-0 ceph-mon[75358]: 9.c scrub ok
Dec 04 10:20:36 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 04 10:20:36 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 04 10:20:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 04 10:20:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:36 compute-0 ceph-mon[75358]: 9.f scrub starts
Dec 04 10:20:36 compute-0 ceph-mon[75358]: 9.f scrub ok
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:20:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:20:37 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 04 10:20:37 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 04 10:20:37 compute-0 sshd-session[110603]: Accepted publickey for zuul from 192.168.122.30 port 60234 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:20:37 compute-0 systemd-logind[798]: New session 37 of user zuul.
Dec 04 10:20:37 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 04 10:20:37 compute-0 sshd-session[110603]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:20:37 compute-0 ceph-mon[75358]: 8.8 scrub starts
Dec 04 10:20:37 compute-0 ceph-mon[75358]: 8.8 scrub ok
Dec 04 10:20:37 compute-0 ceph-mon[75358]: pgmap v293: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:38 compute-0 python3.9[110756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:38 compute-0 ceph-mon[75358]: 10.1e scrub starts
Dec 04 10:20:38 compute-0 ceph-mon[75358]: 10.1e scrub ok
Dec 04 10:20:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 04 10:20:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 04 10:20:39 compute-0 python3.9[110910]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:20:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:40 compute-0 ceph-mon[75358]: pgmap v294: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 04 10:20:40 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 04 10:20:40 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 04 10:20:40 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 04 10:20:40 compute-0 python3.9[111103]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:20:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:41 compute-0 ceph-mon[75358]: 8.a scrub starts
Dec 04 10:20:41 compute-0 ceph-mon[75358]: 8.a scrub ok
Dec 04 10:20:41 compute-0 ceph-mon[75358]: 9.17 scrub starts
Dec 04 10:20:41 compute-0 ceph-mon[75358]: 9.17 scrub ok
Dec 04 10:20:41 compute-0 sshd-session[110606]: Connection closed by 192.168.122.30 port 60234
Dec 04 10:20:41 compute-0 sshd-session[110603]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:20:41 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 04 10:20:41 compute-0 systemd[1]: session-37.scope: Consumed 2.221s CPU time.
Dec 04 10:20:41 compute-0 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Dec 04 10:20:41 compute-0 systemd-logind[798]: Removed session 37.
Dec 04 10:20:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 04 10:20:41 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 04 10:20:42 compute-0 ceph-mon[75358]: 11.c scrub starts
Dec 04 10:20:42 compute-0 ceph-mon[75358]: 11.c scrub ok
Dec 04 10:20:42 compute-0 ceph-mon[75358]: pgmap v295: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:42 compute-0 ceph-mon[75358]: 11.6 scrub starts
Dec 04 10:20:42 compute-0 ceph-mon[75358]: 11.6 scrub ok
Dec 04 10:20:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:43 compute-0 ceph-mon[75358]: pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 04 10:20:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 04 10:20:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:45 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 04 10:20:45 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 04 10:20:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 04 10:20:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 04 10:20:45 compute-0 ceph-mon[75358]: 11.a scrub starts
Dec 04 10:20:45 compute-0 ceph-mon[75358]: 11.a scrub ok
Dec 04 10:20:45 compute-0 ceph-mon[75358]: pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:46 compute-0 ceph-mon[75358]: 8.9 scrub starts
Dec 04 10:20:46 compute-0 ceph-mon[75358]: 8.9 scrub ok
Dec 04 10:20:46 compute-0 ceph-mon[75358]: 8.7 scrub starts
Dec 04 10:20:46 compute-0 ceph-mon[75358]: 8.7 scrub ok
Dec 04 10:20:47 compute-0 ceph-mon[75358]: pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:48 compute-0 sshd-session[111129]: Accepted publickey for zuul from 192.168.122.30 port 54166 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:20:48 compute-0 systemd-logind[798]: New session 38 of user zuul.
Dec 04 10:20:48 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 04 10:20:48 compute-0 sshd-session[111129]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:20:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:49 compute-0 python3.9[111282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:49 compute-0 python3.9[111436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:20:49 compute-0 ceph-mon[75358]: pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:50 compute-0 sudo[111590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvxehucncqulpuiwimptzmdioptuxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843650.282898-40-41020767592859/AnsiballZ_setup.py'
Dec 04 10:20:50 compute-0 sudo[111590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:50 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 04 10:20:50 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 04 10:20:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:50 compute-0 python3.9[111592]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:20:51 compute-0 sudo[111590]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 04 10:20:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 04 10:20:51 compute-0 sudo[111676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmtuidbdnntixyrqlohaoqqovtilswa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843650.282898-40-41020767592859/AnsiballZ_dnf.py'
Dec 04 10:20:51 compute-0 sudo[111676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:51 compute-0 ceph-mon[75358]: 10.7 scrub starts
Dec 04 10:20:51 compute-0 ceph-mon[75358]: 10.7 scrub ok
Dec 04 10:20:51 compute-0 ceph-mon[75358]: pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:51 compute-0 python3.9[111678]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:20:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 04 10:20:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 04 10:20:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:52 compute-0 ceph-mon[75358]: 10.4 scrub starts
Dec 04 10:20:52 compute-0 ceph-mon[75358]: 10.4 scrub ok
Dec 04 10:20:53 compute-0 sudo[111676]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 04 10:20:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 04 10:20:53 compute-0 sudo[111829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umkiuxcudzztrpwrquufgduzvgqjbgbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843653.5245876-52-123529208782866/AnsiballZ_setup.py'
Dec 04 10:20:53 compute-0 sudo[111829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:53 compute-0 ceph-mon[75358]: 11.5 scrub starts
Dec 04 10:20:53 compute-0 ceph-mon[75358]: 11.5 scrub ok
Dec 04 10:20:53 compute-0 ceph-mon[75358]: pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:54 compute-0 python3.9[111831]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:20:54 compute-0 sudo[111829]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:20:54 compute-0 sudo[112024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feeocethgyxmopoohsdezhtqxrnxkyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843654.53816-63-215037478724683/AnsiballZ_file.py'
Dec 04 10:20:54 compute-0 sudo[112024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:54 compute-0 ceph-mon[75358]: 8.0 scrub starts
Dec 04 10:20:54 compute-0 ceph-mon[75358]: 8.0 scrub ok
Dec 04 10:20:55 compute-0 python3.9[112026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:20:55 compute-0 sudo[112024]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:55 compute-0 sudo[112176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvuswisxnzvmljkdldshkxguzdldpzus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843655.3345568-71-112447278801536/AnsiballZ_command.py'
Dec 04 10:20:55 compute-0 sudo[112176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:55 compute-0 python3.9[112178]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:20:55 compute-0 ceph-mon[75358]: pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:55 compute-0 sudo[112176]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 04 10:20:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 04 10:20:56 compute-0 sudo[112341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deogpahaajpvdriwirfaausegeaurvti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843656.1908598-79-254855400591603/AnsiballZ_stat.py'
Dec 04 10:20:56 compute-0 sudo[112341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:56 compute-0 python3.9[112343]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:20:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:56 compute-0 sudo[112341]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:57 compute-0 sudo[112419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtkdcxpparuohzcvhilcgghcmczvuigc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843656.1908598-79-254855400591603/AnsiballZ_file.py'
Dec 04 10:20:57 compute-0 sudo[112419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:57 compute-0 python3.9[112421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:20:57 compute-0 sudo[112419]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:57 compute-0 sudo[112573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfvpcdmqfwnuhdlgvjraclqfirwjjmvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843657.3730206-91-271464927911944/AnsiballZ_stat.py'
Dec 04 10:20:57 compute-0 sudo[112573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:57 compute-0 python3.9[112575]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:20:57 compute-0 sudo[112573]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:20:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:20:57 compute-0 ceph-mon[75358]: 8.5 scrub starts
Dec 04 10:20:57 compute-0 ceph-mon[75358]: 8.5 scrub ok
Dec 04 10:20:57 compute-0 ceph-mon[75358]: pgmap v303: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:58 compute-0 sudo[112651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvaiqousbbyrcqewimrfckohyrggtnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843657.3730206-91-271464927911944/AnsiballZ_file.py'
Dec 04 10:20:58 compute-0 sudo[112651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:58 compute-0 python3.9[112653]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:20:58 compute-0 sudo[112651]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:58 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 04 10:20:58 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 04 10:20:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:20:58 compute-0 sudo[112803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfpekuwohazdnlikggjwttmdbkuvogwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843658.4678016-104-89636767680245/AnsiballZ_ini_file.py'
Dec 04 10:20:58 compute-0 sudo[112803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:59 compute-0 ceph-mon[75358]: 11.1 scrub starts
Dec 04 10:20:59 compute-0 ceph-mon[75358]: 11.1 scrub ok
Dec 04 10:20:59 compute-0 python3.9[112805]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:20:59 compute-0 sudo[112803]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:59 compute-0 sshd-session[112446]: Connection reset by authenticating user root 91.202.233.33 port 33670 [preauth]
Dec 04 10:20:59 compute-0 sudo[112956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllbzmxbhmqyflnjazppclbygknxqnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843659.2846746-104-153730709580011/AnsiballZ_ini_file.py'
Dec 04 10:20:59 compute-0 sudo[112956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:20:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 04 10:20:59 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 04 10:20:59 compute-0 python3.9[112958]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:20:59 compute-0 sudo[112956]: pam_unix(sudo:session): session closed for user root
Dec 04 10:20:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:00 compute-0 sudo[113109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bziglfklgnqpbhapvfbtzzpgrtstqztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843659.8908582-104-136170148800800/AnsiballZ_ini_file.py'
Dec 04 10:21:00 compute-0 sudo[113109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:00 compute-0 python3.9[113111]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:21:00 compute-0 sudo[113109]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 04 10:21:00 compute-0 ceph-mon[75358]: pgmap v304: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:00 compute-0 ceph-mon[75358]: 11.4 scrub starts
Dec 04 10:21:00 compute-0 ceph-mon[75358]: 11.4 scrub ok
Dec 04 10:21:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.423427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660423565, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7246, "num_deletes": 251, "total_data_size": 9406135, "memory_usage": 9577264, "flush_reason": "Manual Compaction"}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660501255, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7481196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7389, "table_properties": {"data_size": 7454574, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76305, "raw_average_key_size": 23, "raw_value_size": 7391644, "raw_average_value_size": 2250, "num_data_blocks": 759, "num_entries": 3284, "num_filter_entries": 3284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843246, "oldest_key_time": 1764843246, "file_creation_time": 1764843660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 77996 microseconds, and 15613 cpu microseconds.
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.501438) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7481196 bytes OK
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.501530) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503605) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503626) EVENT_LOG_v1 {"time_micros": 1764843660503620, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503666) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9374482, prev total WAL file size 9374482, number of live WAL files 2.
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.506214) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7305KB) 13(58KB) 8(1944B)]
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660506339, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7543100, "oldest_snapshot_seqno": -1}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3110 keys, 7495942 bytes, temperature: kUnknown
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660626038, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7495942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7469740, "index_size": 17324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74745, "raw_average_key_size": 24, "raw_value_size": 7408164, "raw_average_value_size": 2382, "num_data_blocks": 762, "num_entries": 3110, "num_filter_entries": 3110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.626500) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7495942 bytes
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.628615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.9 rd, 62.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.2, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3399, records dropped: 289 output_compression: NoCompression
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.628642) EVENT_LOG_v1 {"time_micros": 1764843660628629, "job": 4, "event": "compaction_finished", "compaction_time_micros": 119845, "compaction_time_cpu_micros": 17932, "output_level": 6, "num_output_files": 1, "total_output_size": 7495942, "num_input_records": 3399, "num_output_records": 3110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630652, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630725, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630817, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 04 10:21:00 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.506019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:21:00 compute-0 sudo[113262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysvyqihsvboxwrjmfgvsobiwtwntmqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843660.4735494-104-216575669299059/AnsiballZ_ini_file.py'
Dec 04 10:21:00 compute-0 sudo[113262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:00 compute-0 python3.9[113264]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:21:00 compute-0 sudo[113262]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:01 compute-0 sshd-session[112908]: Invalid user admin from 91.202.233.33 port 34636
Dec 04 10:21:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 04 10:21:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 04 10:21:01 compute-0 ceph-mon[75358]: 11.7 scrub starts
Dec 04 10:21:01 compute-0 ceph-mon[75358]: 11.7 scrub ok
Dec 04 10:21:01 compute-0 ceph-mon[75358]: pgmap v305: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:01 compute-0 sudo[113414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydynsdleqioojgenatqrhasmfnueftht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843661.1565807-135-166850283253328/AnsiballZ_dnf.py'
Dec 04 10:21:01 compute-0 sudo[113414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:01 compute-0 sshd-session[112908]: Connection reset by invalid user admin 91.202.233.33 port 34636 [preauth]
Dec 04 10:21:01 compute-0 python3.9[113416]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:21:02 compute-0 ceph-mon[75358]: 8.19 scrub starts
Dec 04 10:21:02 compute-0 ceph-mon[75358]: 8.19 scrub ok
Dec 04 10:21:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:03 compute-0 sudo[113414]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:03 compute-0 sshd-session[113417]: Invalid user usuario from 91.202.233.33 port 34652
Dec 04 10:21:03 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 04 10:21:03 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 04 10:21:03 compute-0 ceph-mon[75358]: pgmap v306: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:03 compute-0 sshd-session[113417]: Connection reset by invalid user usuario 91.202.233.33 port 34652 [preauth]
Dec 04 10:21:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 04 10:21:03 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 04 10:21:03 compute-0 sudo[113570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fayzzrafdewxssfphjmwniknomwsaegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843663.396007-146-271082033407177/AnsiballZ_setup.py'
Dec 04 10:21:03 compute-0 sudo[113570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:03 compute-0 python3.9[113572]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:21:03 compute-0 sudo[113570]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:04 compute-0 sudo[113725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foudgaigeysszgriuvalxqhinubpfmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843664.1422663-154-57070453874249/AnsiballZ_stat.py'
Dec 04 10:21:04 compute-0 sudo[113725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:04 compute-0 ceph-mon[75358]: 11.1d scrub starts
Dec 04 10:21:04 compute-0 ceph-mon[75358]: 11.1d scrub ok
Dec 04 10:21:04 compute-0 ceph-mon[75358]: 11.10 scrub starts
Dec 04 10:21:04 compute-0 ceph-mon[75358]: 11.10 scrub ok
Dec 04 10:21:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 04 10:21:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 04 10:21:04 compute-0 python3.9[113727]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:21:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 04 10:21:04 compute-0 sudo[113725]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:04 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 04 10:21:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:05 compute-0 sudo[113877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnglwmrwyxwrlfwgrbnbjhuvcvodahlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843664.85752-163-254301305330479/AnsiballZ_stat.py'
Dec 04 10:21:05 compute-0 sudo[113877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:05 compute-0 python3.9[113879]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:21:05 compute-0 sudo[113877]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 04 10:21:05 compute-0 ceph-mon[75358]: 8.1e scrub starts
Dec 04 10:21:05 compute-0 ceph-mon[75358]: 8.1e scrub ok
Dec 04 10:21:05 compute-0 ceph-mon[75358]: 10.8 scrub starts
Dec 04 10:21:05 compute-0 ceph-mon[75358]: 10.8 scrub ok
Dec 04 10:21:05 compute-0 ceph-mon[75358]: pgmap v307: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 04 10:21:05 compute-0 sudo[114029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-towhdjwfcvjpjaroqltggwaipsmoagyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843665.602657-173-243296032573443/AnsiballZ_command.py'
Dec 04 10:21:05 compute-0 sudo[114029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:06 compute-0 python3.9[114031]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:21:06 compute-0 sudo[114029]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:06 compute-0 ceph-mon[75358]: 8.13 scrub starts
Dec 04 10:21:06 compute-0 ceph-mon[75358]: 8.13 scrub ok
Dec 04 10:21:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:07 compute-0 sudo[114182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlufvubsightkzxmgrmrvxllprsgeemt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843666.5647595-183-95322769695230/AnsiballZ_service_facts.py'
Dec 04 10:21:07 compute-0 sudo[114182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:07 compute-0 python3.9[114184]: ansible-service_facts Invoked
Dec 04 10:21:07 compute-0 network[114201]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:21:07 compute-0 network[114202]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:21:07 compute-0 ceph-mon[75358]: pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:07 compute-0 network[114203]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:21:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 04 10:21:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 04 10:21:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:09 compute-0 ceph-mon[75358]: 10.f scrub starts
Dec 04 10:21:09 compute-0 ceph-mon[75358]: 10.f scrub ok
Dec 04 10:21:09 compute-0 ceph-mon[75358]: pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:09 compute-0 sudo[114182]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 04 10:21:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 04 10:21:10 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 04 10:21:10 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 04 10:21:10 compute-0 sudo[114486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvqmgnekgqhbybfafoybghejkggtxqaj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764843670.5141888-198-216148404557755/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764843670.5141888-198-216148404557755/args'
Dec 04 10:21:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:10 compute-0 sudo[114486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:10 compute-0 sudo[114486]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:10 compute-0 sshd-session[113558]: Connection reset by authenticating user root 91.202.233.33 port 34660 [preauth]
Dec 04 10:21:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 04 10:21:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 04 10:21:11 compute-0 sudo[114653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawxjfkrsttfptokpdukkmvayyjexyct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843671.2695546-209-51451434568972/AnsiballZ_dnf.py'
Dec 04 10:21:11 compute-0 sudo[114653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:11 compute-0 python3.9[114655]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:21:11 compute-0 ceph-mon[75358]: 10.b scrub starts
Dec 04 10:21:11 compute-0 ceph-mon[75358]: 10.b scrub ok
Dec 04 10:21:11 compute-0 ceph-mon[75358]: 10.17 scrub starts
Dec 04 10:21:11 compute-0 ceph-mon[75358]: 10.17 scrub ok
Dec 04 10:21:11 compute-0 ceph-mon[75358]: pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:12 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 04 10:21:12 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 04 10:21:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:12 compute-0 ceph-mon[75358]: 10.2 scrub starts
Dec 04 10:21:12 compute-0 ceph-mon[75358]: 10.2 scrub ok
Dec 04 10:21:13 compute-0 sudo[114653]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 04 10:21:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 04 10:21:13 compute-0 ceph-mon[75358]: 8.b scrub starts
Dec 04 10:21:13 compute-0 ceph-mon[75358]: 8.b scrub ok
Dec 04 10:21:13 compute-0 ceph-mon[75358]: pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:14 compute-0 sudo[114806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dirnubpeeqmpkxlwnfpxdhcwgxgcetgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843673.447125-222-262391246227972/AnsiballZ_package_facts.py'
Dec 04 10:21:14 compute-0 sudo[114806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:14 compute-0 python3.9[114808]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 04 10:21:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 04 10:21:14 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 04 10:21:14 compute-0 sudo[114806]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:14 compute-0 ceph-mon[75358]: 10.6 scrub starts
Dec 04 10:21:14 compute-0 ceph-mon[75358]: 10.6 scrub ok
Dec 04 10:21:15 compute-0 sudo[114960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauroiovckwnwsnnpbkilbxwreqdjyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843674.9146373-232-8631633523479/AnsiballZ_stat.py'
Dec 04 10:21:15 compute-0 sudo[114960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:15 compute-0 python3.9[114962]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:15 compute-0 sudo[114960]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:15 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 04 10:21:15 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 04 10:21:15 compute-0 sudo[115038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzzvqxsyibnweixzatwllhybggdthxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843674.9146373-232-8631633523479/AnsiballZ_file.py'
Dec 04 10:21:15 compute-0 sudo[115038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:15 compute-0 python3.9[115040]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:15 compute-0 sudo[115038]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:15 compute-0 ceph-mon[75358]: 8.e scrub starts
Dec 04 10:21:15 compute-0 ceph-mon[75358]: 8.e scrub ok
Dec 04 10:21:15 compute-0 ceph-mon[75358]: pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:16 compute-0 sudo[115079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:21:16 compute-0 sudo[115079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:16 compute-0 sudo[115079]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:16 compute-0 sudo[115134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:21:16 compute-0 sudo[115134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:16 compute-0 sudo[115240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvrnzldxtfsmxqzcxdirogqylxoinax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843676.2707498-244-64050769785074/AnsiballZ_stat.py'
Dec 04 10:21:16 compute-0 sudo[115240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:16 compute-0 python3.9[115242]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:16 compute-0 sudo[115240]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:16 compute-0 sudo[115134]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:21:16 compute-0 sudo[115324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:21:16 compute-0 sudo[115324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:16 compute-0 sudo[115324]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:16 compute-0 sudo[115374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqqbenxotnxtvjocqrtolgynnjydiqse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843676.2707498-244-64050769785074/AnsiballZ_file.py'
Dec 04 10:21:16 compute-0 sudo[115374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:16 compute-0 ceph-mon[75358]: 8.f scrub starts
Dec 04 10:21:16 compute-0 ceph-mon[75358]: 8.f scrub ok
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:21:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:21:17 compute-0 sudo[115377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:21:17 compute-0 sudo[115377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:17 compute-0 sshd-session[114869]: Invalid user terraria from 103.179.218.243 port 41592
Dec 04 10:21:17 compute-0 python3.9[115381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:17 compute-0 sudo[115374]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:17 compute-0 sshd-session[114869]: Received disconnect from 103.179.218.243 port 41592:11: Bye Bye [preauth]
Dec 04 10:21:17 compute-0 sshd-session[114869]: Disconnected from invalid user terraria 103.179.218.243 port 41592 [preauth]
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.314773502 +0000 UTC m=+0.058278685 container create 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:21:17 compute-0 systemd[1]: Started libpod-conmon-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope.
Dec 04 10:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.292834813 +0000 UTC m=+0.036339986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.399146783 +0000 UTC m=+0.142651986 container init 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.40763197 +0000 UTC m=+0.151137143 container start 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.411688534 +0000 UTC m=+0.155193697 container attach 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:21:17 compute-0 priceless_mclaren[115456]: 167 167
Dec 04 10:21:17 compute-0 systemd[1]: libpod-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope: Deactivated successfully.
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.415448531 +0000 UTC m=+0.158953694 container died 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-05d2ac38f4a267929b91a0fc2a7c22c2a720e993cefc3f96e56bfd3a5138cc0e-merged.mount: Deactivated successfully.
Dec 04 10:21:17 compute-0 podman[115427]: 2025-12-04 10:21:17.462844843 +0000 UTC m=+0.206350006 container remove 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:21:17 compute-0 systemd[1]: libpod-conmon-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope: Deactivated successfully.
Dec 04 10:21:17 compute-0 podman[115479]: 2025-12-04 10:21:17.609748246 +0000 UTC m=+0.048082818 container create 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:21:17 compute-0 systemd[1]: Started libpod-conmon-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope.
Dec 04 10:21:17 compute-0 podman[115479]: 2025-12-04 10:21:17.585221467 +0000 UTC m=+0.023556059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:17 compute-0 podman[115479]: 2025-12-04 10:21:17.71751879 +0000 UTC m=+0.155853402 container init 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 04 10:21:17 compute-0 podman[115479]: 2025-12-04 10:21:17.723603241 +0000 UTC m=+0.161937833 container start 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:21:17 compute-0 podman[115479]: 2025-12-04 10:21:17.728013195 +0000 UTC m=+0.166347787 container attach 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:21:17 compute-0 ceph-mon[75358]: pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:18 compute-0 sudo[115633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfkfrncgiybtdnoamdmsseipyfubpjvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843677.653081-262-164959464818442/AnsiballZ_lineinfile.py'
Dec 04 10:21:18 compute-0 sudo[115633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:18 compute-0 eager_ellis[115519]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:21:18 compute-0 eager_ellis[115519]: --> All data devices are unavailable
Dec 04 10:21:18 compute-0 systemd[1]: libpod-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope: Deactivated successfully.
Dec 04 10:21:18 compute-0 podman[115479]: 2025-12-04 10:21:18.259006273 +0000 UTC m=+0.697340855 container died 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:21:18 compute-0 python3.9[115635]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e-merged.mount: Deactivated successfully.
Dec 04 10:21:18 compute-0 sudo[115633]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:18 compute-0 podman[115479]: 2025-12-04 10:21:18.304948671 +0000 UTC m=+0.743283223 container remove 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:21:18 compute-0 systemd[1]: libpod-conmon-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope: Deactivated successfully.
Dec 04 10:21:18 compute-0 sudo[115377]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:18 compute-0 sudo[115682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:21:18 compute-0 sudo[115682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:18 compute-0 sudo[115682]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:18 compute-0 sudo[115707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:21:18 compute-0 sudo[115707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 04 10:21:18 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 04 10:21:18 compute-0 podman[115744]: 2025-12-04 10:21:18.725317258 +0000 UTC m=+0.041752761 container create 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:21:18 compute-0 systemd[1]: Started libpod-conmon-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope.
Dec 04 10:21:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:18 compute-0 podman[115744]: 2025-12-04 10:21:18.705457777 +0000 UTC m=+0.021893330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:19 compute-0 sudo[115889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etyphaqeomhnefdtrksgnisqrmwrqjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843678.8076572-277-243777927576615/AnsiballZ_setup.py'
Dec 04 10:21:19 compute-0 sudo[115889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:19 compute-0 python3.9[115891]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:21:19 compute-0 sudo[115889]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:19 compute-0 podman[115744]: 2025-12-04 10:21:19.6644148 +0000 UTC m=+0.980850333 container init 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:21:19 compute-0 ceph-mon[75358]: 10.e scrub starts
Dec 04 10:21:19 compute-0 ceph-mon[75358]: 10.e scrub ok
Dec 04 10:21:19 compute-0 podman[115744]: 2025-12-04 10:21:19.676878598 +0000 UTC m=+0.993314101 container start 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:21:19 compute-0 podman[115744]: 2025-12-04 10:21:19.682984681 +0000 UTC m=+0.999420204 container attach 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:21:19 compute-0 beautiful_cray[115761]: 167 167
Dec 04 10:21:19 compute-0 systemd[1]: libpod-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope: Deactivated successfully.
Dec 04 10:21:19 compute-0 podman[115744]: 2025-12-04 10:21:19.685458179 +0000 UTC m=+1.001893712 container died 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b6c9eac27a5de9d5912741c69e630705b8ab6b9136647247ff63c028db7875-merged.mount: Deactivated successfully.
Dec 04 10:21:19 compute-0 podman[115744]: 2025-12-04 10:21:19.722281434 +0000 UTC m=+1.038716937 container remove 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:21:19 compute-0 systemd[1]: libpod-conmon-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope: Deactivated successfully.
Dec 04 10:21:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:19 compute-0 podman[115920]: 2025-12-04 10:21:19.870017826 +0000 UTC m=+0.048088988 container create 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:21:19 compute-0 systemd[1]: Started libpod-conmon-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope.
Dec 04 10:21:19 compute-0 podman[115920]: 2025-12-04 10:21:19.846011619 +0000 UTC m=+0.024082841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:19 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:19 compute-0 podman[115920]: 2025-12-04 10:21:19.979125842 +0000 UTC m=+0.157197024 container init 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:21:19 compute-0 podman[115920]: 2025-12-04 10:21:19.986463563 +0000 UTC m=+0.164534725 container start 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:21:19 compute-0 podman[115920]: 2025-12-04 10:21:19.990271481 +0000 UTC m=+0.168342643 container attach 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:21:20 compute-0 sudo[116016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sufuepimualqsauioctvcknhlgorqlre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843678.8076572-277-243777927576615/AnsiballZ_systemd.py'
Dec 04 10:21:20 compute-0 sudo[116016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:20 compute-0 reverent_napier[115936]: {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     "0": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "devices": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "/dev/loop3"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             ],
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_name": "ceph_lv0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_size": "21470642176",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "name": "ceph_lv0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "tags": {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_name": "ceph",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.crush_device_class": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.encrypted": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.objectstore": "bluestore",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_id": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.vdo": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.with_tpm": "0"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             },
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "vg_name": "ceph_vg0"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         }
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     ],
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     "1": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "devices": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "/dev/loop4"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             ],
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_name": "ceph_lv1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_size": "21470642176",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "name": "ceph_lv1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "tags": {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_name": "ceph",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.crush_device_class": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.encrypted": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.objectstore": "bluestore",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_id": "1",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.vdo": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.with_tpm": "0"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             },
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "vg_name": "ceph_vg1"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         }
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     ],
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     "2": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "devices": [
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "/dev/loop5"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             ],
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_name": "ceph_lv2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_size": "21470642176",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "name": "ceph_lv2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "tags": {
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.cluster_name": "ceph",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.crush_device_class": "",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.encrypted": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.objectstore": "bluestore",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osd_id": "2",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.vdo": "0",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:                 "ceph.with_tpm": "0"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             },
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "type": "block",
Dec 04 10:21:20 compute-0 reverent_napier[115936]:             "vg_name": "ceph_vg2"
Dec 04 10:21:20 compute-0 reverent_napier[115936]:         }
Dec 04 10:21:20 compute-0 reverent_napier[115936]:     ]
Dec 04 10:21:20 compute-0 reverent_napier[115936]: }
Dec 04 10:21:20 compute-0 systemd[1]: libpod-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope: Deactivated successfully.
Dec 04 10:21:20 compute-0 podman[115920]: 2025-12-04 10:21:20.330472906 +0000 UTC m=+0.508544098 container died 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c-merged.mount: Deactivated successfully.
Dec 04 10:21:20 compute-0 podman[115920]: 2025-12-04 10:21:20.386489247 +0000 UTC m=+0.564560419 container remove 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:21:20 compute-0 systemd[1]: libpod-conmon-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope: Deactivated successfully.
Dec 04 10:21:20 compute-0 sudo[115707]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:20 compute-0 sudo[116034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:21:20 compute-0 sudo[116034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:20 compute-0 sudo[116034]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:20 compute-0 sudo[116059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:21:20 compute-0 sudo[116059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:20 compute-0 python3.9[116020]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:21:20 compute-0 sudo[116016]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:20 compute-0 ceph-mon[75358]: pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.852082276 +0000 UTC m=+0.040593544 container create 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:21:20 compute-0 systemd[1]: Started libpod-conmon-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope.
Dec 04 10:21:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.919788459 +0000 UTC m=+0.108299747 container init 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.927045738 +0000 UTC m=+0.115557006 container start 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.930453237 +0000 UTC m=+0.118964515 container attach 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.836043953 +0000 UTC m=+0.024555241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:20 compute-0 vigilant_jones[116138]: 167 167
Dec 04 10:21:20 compute-0 systemd[1]: libpod-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope: Deactivated successfully.
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.934221164 +0000 UTC m=+0.122732432 container died 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-25166b00080af2e0cb5a66fe058f393799c356dc44b84bcb1d639f7fe177b03e-merged.mount: Deactivated successfully.
Dec 04 10:21:20 compute-0 podman[116122]: 2025-12-04 10:21:20.969337391 +0000 UTC m=+0.157848659 container remove 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:21:20 compute-0 systemd[1]: libpod-conmon-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope: Deactivated successfully.
Dec 04 10:21:21 compute-0 podman[116163]: 2025-12-04 10:21:21.180306583 +0000 UTC m=+0.066598368 container create bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:21:21 compute-0 systemd[1]: Started libpod-conmon-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope.
Dec 04 10:21:21 compute-0 podman[116163]: 2025-12-04 10:21:21.143876616 +0000 UTC m=+0.030168461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:21:21 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:21:21 compute-0 podman[116163]: 2025-12-04 10:21:21.280481521 +0000 UTC m=+0.166773276 container init bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:21:21 compute-0 podman[116163]: 2025-12-04 10:21:21.297352563 +0000 UTC m=+0.183644318 container start bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:21:21 compute-0 sshd-session[111132]: Connection closed by 192.168.122.30 port 54166
Dec 04 10:21:21 compute-0 podman[116163]: 2025-12-04 10:21:21.301351006 +0000 UTC m=+0.187642761 container attach bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:21:21 compute-0 sshd-session[111129]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:21:21 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 04 10:21:21 compute-0 systemd[1]: session-38.scope: Consumed 23.189s CPU time.
Dec 04 10:21:21 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Dec 04 10:21:21 compute-0 systemd-logind[798]: Removed session 38.
Dec 04 10:21:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 04 10:21:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 04 10:21:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 04 10:21:21 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 04 10:21:21 compute-0 ceph-mon[75358]: pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:22 compute-0 lvm[116258]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:21:22 compute-0 lvm[116259]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:21:22 compute-0 lvm[116259]: VG ceph_vg1 finished
Dec 04 10:21:22 compute-0 lvm[116258]: VG ceph_vg0 finished
Dec 04 10:21:22 compute-0 lvm[116261]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:21:22 compute-0 lvm[116261]: VG ceph_vg2 finished
Dec 04 10:21:22 compute-0 kind_faraday[116180]: {}
Dec 04 10:21:22 compute-0 systemd[1]: libpod-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Deactivated successfully.
Dec 04 10:21:22 compute-0 systemd[1]: libpod-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Consumed 1.499s CPU time.
Dec 04 10:21:22 compute-0 podman[116163]: 2025-12-04 10:21:22.200961629 +0000 UTC m=+1.087253404 container died bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e-merged.mount: Deactivated successfully.
Dec 04 10:21:22 compute-0 podman[116163]: 2025-12-04 10:21:22.248889522 +0000 UTC m=+1.135181277 container remove bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:21:22 compute-0 systemd[1]: libpod-conmon-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Deactivated successfully.
Dec 04 10:21:22 compute-0 sudo[116059]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:21:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:21:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 04 10:21:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 04 10:21:22 compute-0 sudo[116274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:21:22 compute-0 sudo[116274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:21:22 compute-0 sudo[116274]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:22 compute-0 ceph-mon[75358]: 10.19 scrub starts
Dec 04 10:21:22 compute-0 ceph-mon[75358]: 10.19 scrub ok
Dec 04 10:21:22 compute-0 ceph-mon[75358]: 10.d scrub starts
Dec 04 10:21:22 compute-0 ceph-mon[75358]: 10.d scrub ok
Dec 04 10:21:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:21:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:23 compute-0 ceph-mon[75358]: 10.1a scrub starts
Dec 04 10:21:23 compute-0 ceph-mon[75358]: 10.1a scrub ok
Dec 04 10:21:23 compute-0 ceph-mon[75358]: pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 04 10:21:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 04 10:21:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 04 10:21:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 04 10:21:25 compute-0 ceph-mon[75358]: 10.11 scrub starts
Dec 04 10:21:25 compute-0 ceph-mon[75358]: 10.11 scrub ok
Dec 04 10:21:25 compute-0 ceph-mon[75358]: pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:26 compute-0 sshd-session[116299]: Accepted publickey for zuul from 192.168.122.30 port 55210 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:21:26 compute-0 systemd-logind[798]: New session 39 of user zuul.
Dec 04 10:21:26 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 04 10:21:26 compute-0 sshd-session[116299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:21:26 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 04 10:21:26 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:21:26
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:21:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:26 compute-0 ceph-mon[75358]: 10.13 scrub starts
Dec 04 10:21:26 compute-0 ceph-mon[75358]: 10.13 scrub ok
Dec 04 10:21:27 compute-0 sudo[116452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpofdldphzwzeineenzzdxhdekvymkxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843686.6094382-22-79147176142245/AnsiballZ_file.py'
Dec 04 10:21:27 compute-0 sudo[116452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:27 compute-0 python3.9[116454]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:27 compute-0 sudo[116452]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:27 compute-0 ceph-mon[75358]: 10.15 scrub starts
Dec 04 10:21:27 compute-0 ceph-mon[75358]: 10.15 scrub ok
Dec 04 10:21:27 compute-0 ceph-mon[75358]: pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:27 compute-0 sudo[116604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwtdddxqvglwfiugrngehiqpevgetaqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843687.4924898-34-24569540326012/AnsiballZ_stat.py'
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:27 compute-0 sudo[116604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:21:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:21:28 compute-0 python3.9[116606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:28 compute-0 sudo[116604]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:28 compute-0 sudo[116682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhxvefogwnsrxfnlotxzihmwrebpmdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843687.4924898-34-24569540326012/AnsiballZ_file.py'
Dec 04 10:21:28 compute-0 sudo[116682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:28 compute-0 python3.9[116684]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:28 compute-0 sudo[116682]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:28 compute-0 sshd-session[116302]: Connection closed by 192.168.122.30 port 55210
Dec 04 10:21:28 compute-0 sshd-session[116299]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:21:28 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 04 10:21:28 compute-0 systemd[1]: session-39.scope: Consumed 1.497s CPU time.
Dec 04 10:21:28 compute-0 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Dec 04 10:21:28 compute-0 systemd-logind[798]: Removed session 39.
Dec 04 10:21:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:29 compute-0 ceph-mon[75358]: pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 04 10:21:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 04 10:21:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 04 10:21:30 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 04 10:21:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:30 compute-0 ceph-mon[75358]: 10.10 scrub starts
Dec 04 10:21:30 compute-0 ceph-mon[75358]: 10.10 scrub ok
Dec 04 10:21:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 04 10:21:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 04 10:21:31 compute-0 ceph-mon[75358]: 8.6 scrub starts
Dec 04 10:21:31 compute-0 ceph-mon[75358]: 8.6 scrub ok
Dec 04 10:21:31 compute-0 ceph-mon[75358]: pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:31 compute-0 ceph-mon[75358]: 10.14 scrub starts
Dec 04 10:21:31 compute-0 ceph-mon[75358]: 10.14 scrub ok
Dec 04 10:21:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 04 10:21:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 04 10:21:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:32 compute-0 ceph-mon[75358]: 10.12 scrub starts
Dec 04 10:21:32 compute-0 ceph-mon[75358]: 10.12 scrub ok
Dec 04 10:21:33 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 04 10:21:33 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 04 10:21:33 compute-0 ceph-mon[75358]: pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:33 compute-0 sshd-session[116709]: Invalid user postgres from 103.149.86.230 port 52378
Dec 04 10:21:34 compute-0 sshd-session[116709]: Received disconnect from 103.149.86.230 port 52378:11: Bye Bye [preauth]
Dec 04 10:21:34 compute-0 sshd-session[116709]: Disconnected from invalid user postgres 103.149.86.230 port 52378 [preauth]
Dec 04 10:21:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 04 10:21:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 04 10:21:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 04 10:21:34 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 04 10:21:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:34 compute-0 sshd-session[116711]: Accepted publickey for zuul from 192.168.122.30 port 33022 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:21:34 compute-0 systemd-logind[798]: New session 40 of user zuul.
Dec 04 10:21:34 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 04 10:21:34 compute-0 sshd-session[116711]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:21:34 compute-0 ceph-mon[75358]: 10.9 scrub starts
Dec 04 10:21:34 compute-0 ceph-mon[75358]: 10.9 scrub ok
Dec 04 10:21:35 compute-0 ceph-mon[75358]: 9.15 scrub starts
Dec 04 10:21:35 compute-0 ceph-mon[75358]: 9.15 scrub ok
Dec 04 10:21:35 compute-0 ceph-mon[75358]: 9.1c scrub starts
Dec 04 10:21:35 compute-0 ceph-mon[75358]: 9.1c scrub ok
Dec 04 10:21:35 compute-0 ceph-mon[75358]: pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:35 compute-0 python3.9[116864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:21:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 04 10:21:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 04 10:21:36 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 04 10:21:36 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:36 compute-0 sudo[117018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblngxekgnozsoyfonusjployxpjszqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843696.4082541-33-38953665178933/AnsiballZ_file.py'
Dec 04 10:21:36 compute-0 sudo[117018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:36 compute-0 ceph-mon[75358]: 9.14 scrub starts
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:21:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:21:37 compute-0 python3.9[117020]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:37 compute-0 sudo[117018]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:37 compute-0 sshd-session[111601]: Connection closed by 101.47.163.20 port 43978 [preauth]
Dec 04 10:21:37 compute-0 sudo[117193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnvgrcbxujijflimisvragwhetbqhyaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843697.3135273-41-242360401228360/AnsiballZ_stat.py'
Dec 04 10:21:37 compute-0 sudo[117193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:37 compute-0 python3.9[117195]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:37 compute-0 ceph-mon[75358]: 9.14 scrub ok
Dec 04 10:21:37 compute-0 ceph-mon[75358]: 9.1b scrub starts
Dec 04 10:21:37 compute-0 ceph-mon[75358]: 9.1b scrub ok
Dec 04 10:21:37 compute-0 ceph-mon[75358]: pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:37 compute-0 sudo[117193]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:38 compute-0 sudo[117271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigkmlsfroieqcrdrrmvyjnlobzgfswt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843697.3135273-41-242360401228360/AnsiballZ_file.py'
Dec 04 10:21:38 compute-0 sudo[117271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 04 10:21:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 04 10:21:38 compute-0 python3.9[117273]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.uzm0uy36 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:38 compute-0 sudo[117271]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:38 compute-0 ceph-mon[75358]: 9.0 scrub starts
Dec 04 10:21:39 compute-0 sudo[117423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rapelezohskunhmlsltklyevoaokbpst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843698.7488625-61-262447722279371/AnsiballZ_stat.py'
Dec 04 10:21:39 compute-0 sudo[117423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 04 10:21:39 compute-0 python3.9[117425]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 04 10:21:39 compute-0 sudo[117423]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:39 compute-0 sudo[117501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkdosycdbjcbgunerkmmboyuhwmexaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843698.7488625-61-262447722279371/AnsiballZ_file.py'
Dec 04 10:21:39 compute-0 sudo[117501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:39 compute-0 python3.9[117503]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.jja9nxld recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:39 compute-0 sudo[117501]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:39 compute-0 ceph-mon[75358]: 9.0 scrub ok
Dec 04 10:21:39 compute-0 ceph-mon[75358]: pgmap v324: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:39 compute-0 ceph-mon[75358]: 9.2 scrub starts
Dec 04 10:21:39 compute-0 ceph-mon[75358]: 9.2 scrub ok
Dec 04 10:21:40 compute-0 sudo[117653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxejocudjgtxsngbjigqhciazkhwcgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843700.0349653-74-139847269003376/AnsiballZ_file.py'
Dec 04 10:21:40 compute-0 sudo[117653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:40 compute-0 python3.9[117655]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:21:40 compute-0 sudo[117653]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 04 10:21:40 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 04 10:21:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:40 compute-0 sudo[117805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkyyqpkqbadlmqjyczunpdkdnxegbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843700.6234312-82-273303287555153/AnsiballZ_stat.py'
Dec 04 10:21:40 compute-0 sudo[117805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:40 compute-0 ceph-mon[75358]: 9.1d scrub starts
Dec 04 10:21:40 compute-0 ceph-mon[75358]: 9.1d scrub ok
Dec 04 10:21:41 compute-0 python3.9[117807]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:41 compute-0 sudo[117805]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:41 compute-0 sudo[117883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znshhgdbteqkfjsnkndmrsdjmwgyowaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843700.6234312-82-273303287555153/AnsiballZ_file.py'
Dec 04 10:21:41 compute-0 sudo[117883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:41 compute-0 python3.9[117885]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:21:41 compute-0 sudo[117883]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:41 compute-0 sudo[118035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duddejznpqslegtjxonkyihhudzjfwri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843701.7116942-82-208469508234381/AnsiballZ_stat.py'
Dec 04 10:21:41 compute-0 sudo[118035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:42 compute-0 ceph-mon[75358]: pgmap v325: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:42 compute-0 python3.9[118037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 04 10:21:42 compute-0 sudo[118035]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 04 10:21:42 compute-0 sudo[118113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blifdqgxcbqxdfuhfnqjqgapcdcraegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843701.7116942-82-208469508234381/AnsiballZ_file.py'
Dec 04 10:21:42 compute-0 sudo[118113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:42 compute-0 python3.9[118115]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:21:42 compute-0 sudo[118113]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:42 compute-0 sudo[118265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrlhvxsoijsdenvomxautlutnytzcuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843702.7334087-105-18001226216642/AnsiballZ_file.py'
Dec 04 10:21:42 compute-0 sudo[118265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:43 compute-0 ceph-mon[75358]: 9.a scrub starts
Dec 04 10:21:43 compute-0 ceph-mon[75358]: 9.a scrub ok
Dec 04 10:21:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 04 10:21:43 compute-0 python3.9[118267]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 04 10:21:43 compute-0 sudo[118265]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:43 compute-0 sudo[118417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsmmxsvehecoeecmvnlfvnyfrrmaakum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843703.2938447-113-38038326307934/AnsiballZ_stat.py'
Dec 04 10:21:43 compute-0 sudo[118417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:43 compute-0 python3.9[118419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:43 compute-0 sudo[118417]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:44 compute-0 ceph-mon[75358]: pgmap v326: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:44 compute-0 ceph-mon[75358]: 9.4 scrub starts
Dec 04 10:21:44 compute-0 ceph-mon[75358]: 9.4 scrub ok
Dec 04 10:21:44 compute-0 sudo[118495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khbwfraawugzisazezbhbyavprtbklrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843703.2938447-113-38038326307934/AnsiballZ_file.py'
Dec 04 10:21:44 compute-0 sudo[118495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 04 10:21:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 04 10:21:44 compute-0 python3.9[118497]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:44 compute-0 sudo[118495]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:44 compute-0 sudo[118647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqesrdfrpofapxxxtfeszumzxgzhlwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843704.4065363-125-277066983004824/AnsiballZ_stat.py'
Dec 04 10:21:44 compute-0 sudo[118647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:44 compute-0 python3.9[118649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:44 compute-0 sudo[118647]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:45 compute-0 ceph-mon[75358]: 9.1a scrub starts
Dec 04 10:21:45 compute-0 ceph-mon[75358]: 9.1a scrub ok
Dec 04 10:21:45 compute-0 sudo[118725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnnlsvurxhllanvfhbzabpfiebgsoywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843704.4065363-125-277066983004824/AnsiballZ_file.py'
Dec 04 10:21:45 compute-0 sudo[118725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:45 compute-0 python3.9[118727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:45 compute-0 sudo[118725]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:46 compute-0 ceph-mon[75358]: pgmap v327: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:46 compute-0 sudo[118877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjrvezjnxkzdcrmsvsuytgumhqwbhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843705.5102553-137-266378694349536/AnsiballZ_systemd.py'
Dec 04 10:21:46 compute-0 sudo[118877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:46 compute-0 python3.9[118879]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:21:46 compute-0 systemd[1]: Reloading.
Dec 04 10:21:46 compute-0 systemd-rc-local-generator[118907]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:21:46 compute-0 systemd-sysv-generator[118911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:21:46 compute-0 sudo[118877]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 04 10:21:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 04 10:21:47 compute-0 ceph-mon[75358]: pgmap v328: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:47 compute-0 sudo[119066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hctaxujjznrhzwwuoyzgspgmdaiaprro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843706.9414835-145-87750549733270/AnsiballZ_stat.py'
Dec 04 10:21:47 compute-0 sudo[119066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:47 compute-0 python3.9[119068]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:47 compute-0 sudo[119066]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:47 compute-0 sudo[119144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujrsnpoqlgqmhnubhsllztphbhhzfvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843706.9414835-145-87750549733270/AnsiballZ_file.py'
Dec 04 10:21:47 compute-0 sudo[119144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:47 compute-0 python3.9[119146]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:47 compute-0 sudo[119144]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:48 compute-0 ceph-mon[75358]: 9.12 scrub starts
Dec 04 10:21:48 compute-0 ceph-mon[75358]: 9.12 scrub ok
Dec 04 10:21:48 compute-0 sudo[119296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nslirinfeqxligzvyrqkqlfwmxxnstud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843707.9731772-157-225016758725066/AnsiballZ_stat.py'
Dec 04 10:21:48 compute-0 sudo[119296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:48 compute-0 python3.9[119298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:48 compute-0 sudo[119296]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:48 compute-0 sudo[119374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baulbbmleodghqqqnatiotuptckryrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843707.9731772-157-225016758725066/AnsiballZ_file.py'
Dec 04 10:21:48 compute-0 sudo[119374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:48 compute-0 python3.9[119376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:48 compute-0 sudo[119374]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:49 compute-0 sshd-session[119430]: Invalid user debian from 107.175.213.239 port 55876
Dec 04 10:21:49 compute-0 sshd-session[119430]: Received disconnect from 107.175.213.239 port 55876:11: Bye Bye [preauth]
Dec 04 10:21:49 compute-0 sshd-session[119430]: Disconnected from invalid user debian 107.175.213.239 port 55876 [preauth]
Dec 04 10:21:49 compute-0 ceph-mon[75358]: pgmap v329: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:49 compute-0 sudo[119528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvdbeqsbrkepadhprgmnkhytrvogmasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843708.9912865-169-113470364502634/AnsiballZ_systemd.py'
Dec 04 10:21:49 compute-0 sudo[119528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:49 compute-0 python3.9[119530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:21:49 compute-0 systemd[1]: Reloading.
Dec 04 10:21:49 compute-0 systemd-rc-local-generator[119556]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:21:49 compute-0 systemd-sysv-generator[119561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:21:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 04 10:21:49 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 04 10:21:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:49 compute-0 systemd[1]: Starting Create netns directory...
Dec 04 10:21:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 04 10:21:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 04 10:21:49 compute-0 systemd[1]: Finished Create netns directory.
Dec 04 10:21:49 compute-0 sudo[119528]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:50 compute-0 ceph-mon[75358]: 9.3 scrub starts
Dec 04 10:21:50 compute-0 ceph-mon[75358]: 9.3 scrub ok
Dec 04 10:21:50 compute-0 python3.9[119722]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:21:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:50 compute-0 network[119739]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:21:50 compute-0 network[119740]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:21:50 compute-0 network[119741]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:21:51 compute-0 ceph-mon[75358]: pgmap v330: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 04 10:21:51 compute-0 sshd-session[119747]: Invalid user teste from 74.249.218.27 port 36400
Dec 04 10:21:51 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 04 10:21:51 compute-0 sshd-session[119747]: Received disconnect from 74.249.218.27 port 36400:11: Bye Bye [preauth]
Dec 04 10:21:51 compute-0 sshd-session[119747]: Disconnected from invalid user teste 74.249.218.27 port 36400 [preauth]
Dec 04 10:21:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 04 10:21:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 04 10:21:52 compute-0 ceph-mon[75358]: 9.1 scrub starts
Dec 04 10:21:52 compute-0 ceph-mon[75358]: 9.1 scrub ok
Dec 04 10:21:52 compute-0 ceph-mon[75358]: 9.10 scrub starts
Dec 04 10:21:52 compute-0 ceph-mon[75358]: 9.10 scrub ok
Dec 04 10:21:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 04 10:21:52 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 04 10:21:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:53 compute-0 ceph-mon[75358]: 9.d scrub starts
Dec 04 10:21:53 compute-0 ceph-mon[75358]: 9.d scrub ok
Dec 04 10:21:53 compute-0 ceph-mon[75358]: pgmap v331: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 04 10:21:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 04 10:21:54 compute-0 ceph-mon[75358]: 9.1f scrub starts
Dec 04 10:21:54 compute-0 ceph-mon[75358]: 9.1f scrub ok
Dec 04 10:21:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:21:55 compute-0 sudo[120003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awglmayapzrpirarlhitjanfpcmbxkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843715.054499-195-76823046945347/AnsiballZ_stat.py'
Dec 04 10:21:55 compute-0 sudo[120003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:55 compute-0 ceph-mon[75358]: pgmap v332: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:55 compute-0 python3.9[120005]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:55 compute-0 sudo[120003]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:55 compute-0 sudo[120081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxfnapdoicrvqfsaxqmaulpjqaxfgcsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843715.054499-195-76823046945347/AnsiballZ_file.py'
Dec 04 10:21:55 compute-0 sudo[120081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:55 compute-0 python3.9[120083]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:55 compute-0 sudo[120081]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:56 compute-0 sudo[120233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smikvqmratsvacvcewywjhnadudrnygo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843716.1595986-208-138254822979073/AnsiballZ_file.py'
Dec 04 10:21:56 compute-0 sudo[120233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:56 compute-0 python3.9[120235]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:56 compute-0 sudo[120233]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 04 10:21:56 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 04 10:21:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:57 compute-0 sudo[120385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahixvxjyzxhnplzdaejvdfjxkmiymqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843716.8042905-216-227631138957000/AnsiballZ_stat.py'
Dec 04 10:21:57 compute-0 sudo[120385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:57 compute-0 python3.9[120387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:21:57 compute-0 sudo[120385]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:57 compute-0 sudo[120463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxvypzorvswjencqrxjwrsdvqikewvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843716.8042905-216-227631138957000/AnsiballZ_file.py'
Dec 04 10:21:57 compute-0 sudo[120463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:57 compute-0 python3.9[120465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:57 compute-0 sudo[120463]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:57 compute-0 ceph-mon[75358]: 9.9 scrub starts
Dec 04 10:21:57 compute-0 ceph-mon[75358]: 9.9 scrub ok
Dec 04 10:21:57 compute-0 ceph-mon[75358]: pgmap v333: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:21:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:21:58 compute-0 sudo[120617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtasflzwepiggncatxgqqqqediqolqsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843718.0456326-231-85403081282946/AnsiballZ_timezone.py'
Dec 04 10:21:58 compute-0 sudo[120617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:58 compute-0 sshd-session[120494]: Invalid user monitoring from 217.154.62.22 port 40586
Dec 04 10:21:58 compute-0 python3.9[120619]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 04 10:21:58 compute-0 sshd-session[120494]: Received disconnect from 217.154.62.22 port 40586:11: Bye Bye [preauth]
Dec 04 10:21:58 compute-0 sshd-session[120494]: Disconnected from invalid user monitoring 217.154.62.22 port 40586 [preauth]
Dec 04 10:21:58 compute-0 systemd[1]: Starting Time & Date Service...
Dec 04 10:21:58 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 04 10:21:58 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 04 10:21:58 compute-0 systemd[1]: Started Time & Date Service.
Dec 04 10:21:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:21:58 compute-0 sudo[120617]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:59 compute-0 ceph-mon[75358]: 9.16 scrub starts
Dec 04 10:21:59 compute-0 ceph-mon[75358]: 9.16 scrub ok
Dec 04 10:21:59 compute-0 sudo[120773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbbqseiunctictbwhxmduhijkhosfhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843719.0830784-240-248362988529189/AnsiballZ_file.py'
Dec 04 10:21:59 compute-0 sudo[120773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:21:59 compute-0 python3.9[120775]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:21:59 compute-0 sudo[120773]: pam_unix(sudo:session): session closed for user root
Dec 04 10:21:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:00 compute-0 ceph-mon[75358]: pgmap v334: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:00 compute-0 sudo[120925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlnwupcxdayuabssxjuhrzmcaoxqsipp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843719.803547-248-127379850367131/AnsiballZ_stat.py'
Dec 04 10:22:00 compute-0 sudo[120925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:00 compute-0 python3.9[120927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:00 compute-0 sudo[120925]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:00 compute-0 sudo[121003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocqtuphuwthbkkdiotfcmssnmkovvnch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843719.803547-248-127379850367131/AnsiballZ_file.py'
Dec 04 10:22:00 compute-0 sudo[121003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:00 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 04 10:22:00 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 04 10:22:00 compute-0 python3.9[121005]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:00 compute-0 sudo[121003]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:01 compute-0 ceph-mon[75358]: 9.b scrub starts
Dec 04 10:22:01 compute-0 ceph-mon[75358]: 9.b scrub ok
Dec 04 10:22:01 compute-0 sudo[121155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpsrylrbxjsmejqajmopighgnrosnjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843721.0127387-260-32108339737962/AnsiballZ_stat.py'
Dec 04 10:22:01 compute-0 sudo[121155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:01 compute-0 python3.9[121157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:01 compute-0 sudo[121155]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:01 compute-0 sudo[121233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjuhjenrcddxnlkpjcswjlqimczumeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843721.0127387-260-32108339737962/AnsiballZ_file.py'
Dec 04 10:22:01 compute-0 sudo[121233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:01 compute-0 python3.9[121235]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6dm2prbk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:01 compute-0 sudo[121233]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:02 compute-0 ceph-mon[75358]: pgmap v335: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:02 compute-0 sudo[121385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcrmxsexcfgmqhzlxwmjaepsrnmeytvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843722.050498-272-156101019233618/AnsiballZ_stat.py'
Dec 04 10:22:02 compute-0 sudo[121385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:02 compute-0 python3.9[121387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:02 compute-0 sudo[121385]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:02 compute-0 sudo[121463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkwvhtdnxsywkvtwipjytxsusjcokaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843722.050498-272-156101019233618/AnsiballZ_file.py'
Dec 04 10:22:02 compute-0 sudo[121463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:02 compute-0 python3.9[121465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:02 compute-0 sudo[121463]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:03 compute-0 sudo[121615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwawbkahmlicixrhiduabbcutappxiti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843723.1547747-285-146264886529100/AnsiballZ_command.py'
Dec 04 10:22:03 compute-0 sudo[121615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:03 compute-0 python3.9[121617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:22:03 compute-0 sudo[121615]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:04 compute-0 ceph-mon[75358]: pgmap v336: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:04 compute-0 sudo[121768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtryurjlyvjvpxjnqamcohqlhpvgsab ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843723.9917004-293-119998614965066/AnsiballZ_edpm_nftables_from_files.py'
Dec 04 10:22:04 compute-0 sudo[121768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:04 compute-0 python3[121770]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 04 10:22:04 compute-0 sudo[121768]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:05 compute-0 sudo[121920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmxfcmoijomfwtbyqechbslaingdouqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843724.9571748-301-108409640744552/AnsiballZ_stat.py'
Dec 04 10:22:05 compute-0 sudo[121920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:05 compute-0 python3.9[121922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:05 compute-0 sudo[121920]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:05 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 04 10:22:05 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 04 10:22:05 compute-0 sudo[121998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knngonsqmrgpvpemkmjfvjuqsoxveypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843724.9571748-301-108409640744552/AnsiballZ_file.py'
Dec 04 10:22:05 compute-0 sudo[121998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:06 compute-0 python3.9[122000]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:06 compute-0 sudo[121998]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:06 compute-0 ceph-mon[75358]: pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:06 compute-0 ceph-mon[75358]: 9.5 scrub starts
Dec 04 10:22:06 compute-0 ceph-mon[75358]: 9.5 scrub ok
Dec 04 10:22:06 compute-0 sudo[122150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijzilrcsyozawmimjdazgvcjxsqrrlgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843726.2312324-313-161504688384568/AnsiballZ_stat.py'
Dec 04 10:22:06 compute-0 sudo[122150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:06 compute-0 python3.9[122152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:06 compute-0 sudo[122150]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:06 compute-0 sudo[122228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siupzzpchytnlribglylvatvglzzugfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843726.2312324-313-161504688384568/AnsiballZ_file.py'
Dec 04 10:22:07 compute-0 sudo[122228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:07 compute-0 python3.9[122230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:07 compute-0 sudo[122228]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:07 compute-0 sudo[122380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxgzucqdgrvydlkxgmkskccrswicmten ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843727.3624682-325-129548694284675/AnsiballZ_stat.py'
Dec 04 10:22:07 compute-0 sudo[122380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:07 compute-0 python3.9[122382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:07 compute-0 sudo[122380]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:08 compute-0 sudo[122458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakzjvasaddvxqpxexegofuwycsspcdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843727.3624682-325-129548694284675/AnsiballZ_file.py'
Dec 04 10:22:08 compute-0 sudo[122458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:08 compute-0 ceph-mon[75358]: pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:08 compute-0 python3.9[122460]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:08 compute-0 sudo[122458]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:08 compute-0 sudo[122610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmwpwbfpqrvvfblqbllzzexgddmpgcrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843728.4470072-337-89357714243917/AnsiballZ_stat.py'
Dec 04 10:22:08 compute-0 sudo[122610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:08 compute-0 python3.9[122612]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:08 compute-0 sudo[122610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:09 compute-0 ceph-mon[75358]: pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:09 compute-0 sudo[122688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-junjrlpyprbyrlojdfjnstrjfasmgtpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843728.4470072-337-89357714243917/AnsiballZ_file.py'
Dec 04 10:22:09 compute-0 sudo[122688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:09 compute-0 python3.9[122690]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:09 compute-0 sudo[122688]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:09 compute-0 sudo[122840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwvmzxlzvwwuiaphbbqhlhusqdkhxzev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843729.5788379-349-106603063435944/AnsiballZ_stat.py'
Dec 04 10:22:09 compute-0 sudo[122840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:10 compute-0 python3.9[122842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:10 compute-0 sudo[122840]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:10 compute-0 sudo[122918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvocnzftylxncvsxqnzpglzychhmxlso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843729.5788379-349-106603063435944/AnsiballZ_file.py'
Dec 04 10:22:10 compute-0 sudo[122918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:10 compute-0 python3.9[122920]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:10 compute-0 sudo[122918]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:11 compute-0 sudo[123070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swacxsltbtsavxjpevatkacijcuyemph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843730.7976685-362-189320438114718/AnsiballZ_command.py'
Dec 04 10:22:11 compute-0 sudo[123070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:11 compute-0 python3.9[123072]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:22:11 compute-0 sudo[123070]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:11 compute-0 ceph-mon[75358]: pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:11 compute-0 sudo[123225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsgrlkmjdqubkgvdcgwqpxcvjbgkgpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843731.4977107-370-7145208096918/AnsiballZ_blockinfile.py'
Dec 04 10:22:11 compute-0 sudo[123225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:12 compute-0 python3.9[123227]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:12 compute-0 sudo[123225]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:12 compute-0 sudo[123377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjxzbrpmlqtyspsfxpbxvosqsldhjkwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843732.4397984-379-233837859266305/AnsiballZ_file.py'
Dec 04 10:22:12 compute-0 sudo[123377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:12 compute-0 python3.9[123379]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:12 compute-0 sudo[123377]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:13 compute-0 sudo[123529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrmuurfegbulkzfxjvakkycraaeiqjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843733.0744958-379-268457101249793/AnsiballZ_file.py'
Dec 04 10:22:13 compute-0 sudo[123529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:13 compute-0 python3.9[123531]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:13 compute-0 sudo[123529]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:13 compute-0 ceph-mon[75358]: pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:14 compute-0 sudo[123681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvtlwhsdsihwwdskzbxcchvsbcaorusd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843733.76062-394-178729425131633/AnsiballZ_mount.py'
Dec 04 10:22:14 compute-0 sudo[123681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:14 compute-0 python3.9[123683]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 04 10:22:14 compute-0 sudo[123681]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:14 compute-0 sudo[123833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyapucqojduydahumirnasmjysbtlrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843734.6450992-394-139207452255853/AnsiballZ_mount.py'
Dec 04 10:22:14 compute-0 sudo[123833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:15 compute-0 python3.9[123835]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 04 10:22:15 compute-0 sudo[123833]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:15 compute-0 sshd-session[116714]: Connection closed by 192.168.122.30 port 33022
Dec 04 10:22:15 compute-0 sshd-session[116711]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:22:15 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 04 10:22:15 compute-0 systemd[1]: session-40.scope: Consumed 29.576s CPU time.
Dec 04 10:22:15 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Dec 04 10:22:15 compute-0 systemd-logind[798]: Removed session 40.
Dec 04 10:22:15 compute-0 ceph-mon[75358]: pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:17 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 04 10:22:17 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 04 10:22:17 compute-0 ceph-mon[75358]: pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:18 compute-0 ceph-mon[75358]: 9.11 scrub starts
Dec 04 10:22:18 compute-0 ceph-mon[75358]: 9.11 scrub ok
Dec 04 10:22:19 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 04 10:22:19 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 04 10:22:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:19 compute-0 ceph-mon[75358]: pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:19 compute-0 ceph-mon[75358]: 9.1e scrub starts
Dec 04 10:22:19 compute-0 ceph-mon[75358]: 9.1e scrub ok
Dec 04 10:22:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:20 compute-0 sshd-session[123860]: Accepted publickey for zuul from 192.168.122.30 port 39084 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:22:20 compute-0 systemd-logind[798]: New session 41 of user zuul.
Dec 04 10:22:20 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 04 10:22:20 compute-0 sshd-session[123860]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:22:21 compute-0 sudo[124013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkocvxvodecuepbxzmummwqoqjvoxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843740.9856784-16-219557200866804/AnsiballZ_tempfile.py'
Dec 04 10:22:21 compute-0 sudo[124013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:21 compute-0 python3.9[124015]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 04 10:22:21 compute-0 sudo[124013]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:21 compute-0 ceph-mon[75358]: pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:22 compute-0 sudo[124165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkyxtdsuolgmjafvlzneibdhgvfhhkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843741.7760825-28-192343009658816/AnsiballZ_stat.py'
Dec 04 10:22:22 compute-0 sudo[124165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:22 compute-0 python3.9[124167]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:22:22 compute-0 sudo[124168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:22:22 compute-0 sudo[124168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:22 compute-0 sudo[124168]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:22 compute-0 sudo[124165]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:22 compute-0 sudo[124195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:22:22 compute-0 sudo[124195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:23 compute-0 sudo[124401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glkxaefzabkmngkreqelejrcflsfkudv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843742.6123815-36-117545791217563/AnsiballZ_slurp.py'
Dec 04 10:22:23 compute-0 sudo[124195]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:23 compute-0 sudo[124401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:22:23 compute-0 sudo[124404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:22:23 compute-0 sudo[124404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:23 compute-0 sudo[124404]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:23 compute-0 sudo[124429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:22:23 compute-0 sudo[124429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:23 compute-0 python3.9[124403]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 04 10:22:23 compute-0 sudo[124401]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.444551028 +0000 UTC m=+0.039468045 container create 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:22:23 compute-0 systemd[1]: Started libpod-conmon-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope.
Dec 04 10:22:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.519635782 +0000 UTC m=+0.114552829 container init 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.424927839 +0000 UTC m=+0.019844886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.526321049 +0000 UTC m=+0.121238066 container start 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.529210651 +0000 UTC m=+0.124127668 container attach 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:22:23 compute-0 condescending_mirzakhani[124579]: 167 167
Dec 04 10:22:23 compute-0 systemd[1]: libpod-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope: Deactivated successfully.
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.532891962 +0000 UTC m=+0.127808979 container died 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:22:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed47030be6afc3810741e9198233cfebcebc612601eb9ae88719a5aec5613381-merged.mount: Deactivated successfully.
Dec 04 10:22:23 compute-0 podman[124534]: 2025-12-04 10:22:23.588663014 +0000 UTC m=+0.183580031 container remove 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:22:23 compute-0 systemd[1]: libpod-conmon-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope: Deactivated successfully.
Dec 04 10:22:23 compute-0 sudo[124648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkdekfxkbadhgylnavxbcbvijzxczfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843743.3415232-44-243326432585835/AnsiballZ_stat.py'
Dec 04 10:22:23 compute-0 sudo[124648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:23 compute-0 podman[124658]: 2025-12-04 10:22:23.778291616 +0000 UTC m=+0.072109501 container create aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:22:23 compute-0 python3.9[124652]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.h5jwro0g follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:22:23 compute-0 systemd[1]: Started libpod-conmon-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope.
Dec 04 10:22:23 compute-0 sudo[124648]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:23 compute-0 podman[124658]: 2025-12-04 10:22:23.730896993 +0000 UTC m=+0.024714898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:23 compute-0 podman[124658]: 2025-12-04 10:22:23.871402248 +0000 UTC m=+0.165220163 container init aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:22:23 compute-0 podman[124658]: 2025-12-04 10:22:23.881918731 +0000 UTC m=+0.175736616 container start aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:22:23 compute-0 podman[124658]: 2025-12-04 10:22:23.892132386 +0000 UTC m=+0.185950301 container attach aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:22:23 compute-0 ceph-mon[75358]: pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:22:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:22:24 compute-0 sudo[124813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eodsvzuuvanjywptkcymnuxcpznyjdnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843743.3415232-44-243326432585835/AnsiballZ_copy.py'
Dec 04 10:22:24 compute-0 sudo[124813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:24 compute-0 quirky_galileo[124676]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:22:24 compute-0 quirky_galileo[124676]: --> All data devices are unavailable
Dec 04 10:22:24 compute-0 systemd[1]: libpod-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope: Deactivated successfully.
Dec 04 10:22:24 compute-0 podman[124658]: 2025-12-04 10:22:24.353214449 +0000 UTC m=+0.647032334 container died aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da-merged.mount: Deactivated successfully.
Dec 04 10:22:24 compute-0 podman[124658]: 2025-12-04 10:22:24.436735954 +0000 UTC m=+0.730553839 container remove aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:22:24 compute-0 systemd[1]: libpod-conmon-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope: Deactivated successfully.
Dec 04 10:22:24 compute-0 python3.9[124815]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.h5jwro0g mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843743.3415232-44-243326432585835/.source.h5jwro0g _original_basename=.lwcmjfx6 follow=False checksum=10f9bff719ccd38a8a0d0cdbb472b912e28b2576 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:24 compute-0 sudo[124429]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:24 compute-0 sudo[124813]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:24 compute-0 sudo[124833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:22:24 compute-0 sudo[124833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:24 compute-0 sudo[124833]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:24 compute-0 sudo[124882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:22:24 compute-0 sudo[124882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.890664799 +0000 UTC m=+0.039887426 container create 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:22:24 compute-0 systemd[1]: Started libpod-conmon-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope.
Dec 04 10:22:24 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.873680965 +0000 UTC m=+0.022903592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.973471455 +0000 UTC m=+0.122694092 container init 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.981798163 +0000 UTC m=+0.131020780 container start 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.985254349 +0000 UTC m=+0.134476986 container attach 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:22:24 compute-0 inspiring_carver[124987]: 167 167
Dec 04 10:22:24 compute-0 systemd[1]: libpod-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope: Deactivated successfully.
Dec 04 10:22:24 compute-0 podman[124971]: 2025-12-04 10:22:24.988400778 +0000 UTC m=+0.137623405 container died 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b74a20630c1cb587188436e485f8d656fd427b43539727c5ab70a99c2ec5c0c-merged.mount: Deactivated successfully.
Dec 04 10:22:25 compute-0 podman[124971]: 2025-12-04 10:22:25.028173059 +0000 UTC m=+0.177395676 container remove 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:22:25 compute-0 systemd[1]: libpod-conmon-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope: Deactivated successfully.
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.191616258 +0000 UTC m=+0.047048206 container create f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:22:25 compute-0 systemd[1]: Started libpod-conmon-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope.
Dec 04 10:22:25 compute-0 sudo[125097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iimcnatlnyqxmrmyeljrbyqljpfcghiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843744.6473858-59-114073013605580/AnsiballZ_setup.py'
Dec 04 10:22:25 compute-0 sudo[125097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:25 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.168959192 +0000 UTC m=+0.024391170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.280384542 +0000 UTC m=+0.135816530 container init f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.287682545 +0000 UTC m=+0.143114503 container start f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.294400922 +0000 UTC m=+0.149832880 container attach f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:22:25 compute-0 python3.9[125104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]: {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     "0": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "devices": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "/dev/loop3"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             ],
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_name": "ceph_lv0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_size": "21470642176",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "name": "ceph_lv0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "tags": {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_name": "ceph",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.crush_device_class": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.encrypted": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.objectstore": "bluestore",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_id": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.vdo": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.with_tpm": "0"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             },
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "vg_name": "ceph_vg0"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         }
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     ],
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     "1": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "devices": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "/dev/loop4"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             ],
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_name": "ceph_lv1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_size": "21470642176",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:22:25 compute-0 sudo[125097]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "name": "ceph_lv1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "tags": {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_name": "ceph",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.crush_device_class": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.encrypted": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.objectstore": "bluestore",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_id": "1",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.vdo": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.with_tpm": "0"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             },
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "vg_name": "ceph_vg1"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         }
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     ],
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     "2": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "devices": [
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "/dev/loop5"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             ],
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_name": "ceph_lv2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_size": "21470642176",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "name": "ceph_lv2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "tags": {
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.cluster_name": "ceph",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.crush_device_class": "",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.encrypted": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.objectstore": "bluestore",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osd_id": "2",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.vdo": "0",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:                 "ceph.with_tpm": "0"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             },
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "type": "block",
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:             "vg_name": "ceph_vg2"
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:         }
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]:     ]
Dec 04 10:22:25 compute-0 distracted_bhabha[125102]: }
Dec 04 10:22:25 compute-0 systemd[1]: libpod-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope: Deactivated successfully.
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.604624612 +0000 UTC m=+0.460056560 container died f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250-merged.mount: Deactivated successfully.
Dec 04 10:22:25 compute-0 podman[125057]: 2025-12-04 10:22:25.673003418 +0000 UTC m=+0.528435376 container remove f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:22:25 compute-0 systemd[1]: libpod-conmon-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope: Deactivated successfully.
Dec 04 10:22:25 compute-0 sudo[124882]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:25 compute-0 sudo[125142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:22:25 compute-0 sudo[125142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:25 compute-0 sudo[125142]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:25 compute-0 sudo[125172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:22:25 compute-0 sudo[125172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:25 compute-0 ceph-mon[75358]: pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.13881404 +0000 UTC m=+0.024804750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:26 compute-0 sudo[125348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hopcuernqgyouwsebcofqzgzcropvzev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843745.8789082-68-60368811594371/AnsiballZ_blockinfile.py'
Dec 04 10:22:26 compute-0 sudo[125348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.352402819 +0000 UTC m=+0.238393499 container create 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:22:26 compute-0 systemd[1]: Started libpod-conmon-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope.
Dec 04 10:22:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.45465748 +0000 UTC m=+0.340648180 container init 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.461067491 +0000 UTC m=+0.347058171 container start 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.464502466 +0000 UTC m=+0.350493166 container attach 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:22:26 compute-0 peaceful_bell[125353]: 167 167
Dec 04 10:22:26 compute-0 systemd[1]: libpod-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope: Deactivated successfully.
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.466377592 +0000 UTC m=+0.352368272 container died 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e3f7396a9f9aa3b78c185cdd8a113bd75edc84cbc98630e733e6fa8ee97261d-merged.mount: Deactivated successfully.
Dec 04 10:22:26 compute-0 python3.9[125350]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaDrGsfyH66GeTPneOf4P9cqhJJcxgP3bu0E7RAjEstx4o7NevlnfodrpsWI3GhJ5z8ru5yYrnT8gj6K/RfM5zjWXW+Ul4lDWJ1UnIBsqOM+qHdwpyOanGFwsD1SStOqDLQRPhop1d9LdePkBXvJSXJ80Mpcjwm1bfGwN/fJl8zLFWskfkIYThTGAzthtkHNPXQXTBX+VOKpcthU/qN5CP8Y/w/9w96vwq/0dHExjueOOk28BTWEQCwxPpkb1Wrd6hQ3KYnZye2JOZh3qqNaX44hPg8VLhv3agVerNv6vRiI2EbdHHYD2I5gXfV7bQGhRzhpFEZm2DfYLr5b8H1kG9ocx3KHW2+TctXCO2hCdJhjjuQQb033in90uXPuMsEEvmtCnc5vbJ5DKpgiaJysNZhmTkpKiJ4UVa6HeBh3riio7zeHc3bjI/1AD1cejpy6OEoWwk/X8ydA6bau1ApGvoHoEAXhlES4J/a6CUovnch+uMkircx8hJcYthuNhJIk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBhSkNncUNzxmzyjy22XSoHmC2WfRWk9PEzKRLlibq2
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBeg0yEcOxT9ax0vZC/VGcWoLt2isE/U7UTL1uRpP8q51Um5h2uaP4tcFVGL1g6uXlC20O3SCTRskwpUg5sj6I=
                                              create=True mode=0644 path=/tmp/ansible.h5jwro0g state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:26 compute-0 podman[125261]: 2025-12-04 10:22:26.506793601 +0000 UTC m=+0.392784301 container remove 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:22:26 compute-0 systemd[1]: libpod-conmon-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope: Deactivated successfully.
Dec 04 10:22:26 compute-0 sudo[125348]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:26 compute-0 podman[125402]: 2025-12-04 10:22:26.656591968 +0000 UTC m=+0.039875215 container create fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:22:26 compute-0 systemd[1]: Started libpod-conmon-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope.
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:22:26
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:22:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:26 compute-0 podman[125402]: 2025-12-04 10:22:26.639989404 +0000 UTC m=+0.023272671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:22:26 compute-0 podman[125402]: 2025-12-04 10:22:26.745004284 +0000 UTC m=+0.128287551 container init fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Dec 04 10:22:26 compute-0 podman[125402]: 2025-12-04 10:22:26.75323768 +0000 UTC m=+0.136520927 container start fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:22:26 compute-0 podman[125402]: 2025-12-04 10:22:26.755669321 +0000 UTC m=+0.138952568 container attach fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:22:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:27 compute-0 sudo[125559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikmmmpdtiqmnpenchdxwvgpoxmfbzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843746.6534567-76-257459853142027/AnsiballZ_command.py'
Dec 04 10:22:27 compute-0 sudo[125559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:27 compute-0 python3.9[125563]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.h5jwro0g' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:22:27 compute-0 sudo[125559]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:27 compute-0 lvm[125674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:22:27 compute-0 lvm[125674]: VG ceph_vg0 finished
Dec 04 10:22:27 compute-0 lvm[125675]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:22:27 compute-0 lvm[125675]: VG ceph_vg1 finished
Dec 04 10:22:27 compute-0 lvm[125681]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:22:27 compute-0 lvm[125681]: VG ceph_vg2 finished
Dec 04 10:22:27 compute-0 gracious_meitner[125466]: {}
Dec 04 10:22:27 compute-0 systemd[1]: libpod-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Deactivated successfully.
Dec 04 10:22:27 compute-0 systemd[1]: libpod-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Consumed 1.379s CPU time.
Dec 04 10:22:27 compute-0 podman[125402]: 2025-12-04 10:22:27.645990554 +0000 UTC m=+1.029273801 container died fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a-merged.mount: Deactivated successfully.
Dec 04 10:22:27 compute-0 podman[125402]: 2025-12-04 10:22:27.68991034 +0000 UTC m=+1.073193587 container remove fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:22:27 compute-0 systemd[1]: libpod-conmon-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Deactivated successfully.
Dec 04 10:22:27 compute-0 sudo[125172]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:22:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:22:27 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:27 compute-0 sudo[125743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:22:27 compute-0 sudo[125743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:22:27 compute-0 sudo[125743]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:27 compute-0 sudo[125820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbtbgqkrqxokfuweucjtbgtppilwvtgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843747.4735975-84-220641922733793/AnsiballZ_file.py'
Dec 04 10:22:27 compute-0 sudo[125820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:22:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:22:28 compute-0 ceph-mon[75358]: pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:22:28 compute-0 python3.9[125822]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.h5jwro0g state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:28 compute-0 sudo[125820]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:28 compute-0 sshd-session[123863]: Connection closed by 192.168.122.30 port 39084
Dec 04 10:22:28 compute-0 sshd-session[123860]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:22:28 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 04 10:22:28 compute-0 systemd[1]: session-41.scope: Consumed 4.800s CPU time.
Dec 04 10:22:28 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Dec 04 10:22:28 compute-0 systemd-logind[798]: Removed session 41.
Dec 04 10:22:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 04 10:22:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:30 compute-0 ceph-mon[75358]: pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:32 compute-0 ceph-mon[75358]: pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:33 compute-0 sshd-session[125850]: Accepted publickey for zuul from 192.168.122.30 port 55102 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:22:33 compute-0 systemd-logind[798]: New session 42 of user zuul.
Dec 04 10:22:33 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 04 10:22:33 compute-0 sshd-session[125850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:22:34 compute-0 ceph-mon[75358]: pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:34 compute-0 python3.9[126003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:22:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:35 compute-0 sudo[126157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpeayxtmffmbavzafgipnsmdypgywsur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843755.1694012-32-102128120744403/AnsiballZ_systemd.py'
Dec 04 10:22:35 compute-0 sudo[126157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:36 compute-0 python3.9[126159]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 04 10:22:36 compute-0 ceph-mon[75358]: pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:36 compute-0 sudo[126157]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:36 compute-0 sudo[126311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shecfqnchntdzoaxqncjawaeoohcqnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843756.2598188-40-30004161057513/AnsiballZ_systemd.py'
Dec 04 10:22:36 compute-0 sudo[126311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:36 compute-0 python3.9[126313]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:36 compute-0 sudo[126311]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:22:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:22:37 compute-0 ceph-mon[75358]: pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:37 compute-0 sudo[126464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oleddajynsyruldeesdltmwrxcctrtnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843757.0397034-49-245946972388156/AnsiballZ_command.py'
Dec 04 10:22:37 compute-0 sudo[126464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:37 compute-0 python3.9[126466]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:22:37 compute-0 sudo[126464]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:38 compute-0 sudo[126617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqzkmkldsxvzuqpdcfzjualxqxyfpugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843757.8546495-57-186750166954813/AnsiballZ_stat.py'
Dec 04 10:22:38 compute-0 sudo[126617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:38 compute-0 python3.9[126619]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:22:38 compute-0 sudo[126617]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:39 compute-0 sudo[126769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbsojjeolgvssrpurspgsgfpjvzvvqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843758.5972981-66-43815264416027/AnsiballZ_file.py'
Dec 04 10:22:39 compute-0 sudo[126769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:39 compute-0 python3.9[126771]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:22:39 compute-0 sudo[126769]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:39 compute-0 sshd-session[125853]: Connection closed by 192.168.122.30 port 55102
Dec 04 10:22:39 compute-0 sshd-session[125850]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:22:39 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 04 10:22:39 compute-0 systemd[1]: session-42.scope: Consumed 3.575s CPU time.
Dec 04 10:22:39 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Dec 04 10:22:39 compute-0 systemd-logind[798]: Removed session 42.
Dec 04 10:22:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:40 compute-0 ceph-mon[75358]: pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:41 compute-0 ceph-mon[75358]: pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:43 compute-0 ceph-mon[75358]: pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:44 compute-0 sshd-session[126796]: Accepted publickey for zuul from 192.168.122.30 port 55604 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:22:44 compute-0 systemd-logind[798]: New session 43 of user zuul.
Dec 04 10:22:44 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 04 10:22:44 compute-0 sshd-session[126796]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:22:45 compute-0 sshd-session[71537]: Received disconnect from 38.102.83.189 port 60740:11: disconnected by user
Dec 04 10:22:45 compute-0 sshd-session[71537]: Disconnected from user zuul 38.102.83.189 port 60740
Dec 04 10:22:45 compute-0 sshd-session[71534]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:22:45 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 04 10:22:45 compute-0 systemd[1]: session-18.scope: Consumed 1min 49.879s CPU time.
Dec 04 10:22:45 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Dec 04 10:22:45 compute-0 systemd-logind[798]: Removed session 18.
Dec 04 10:22:45 compute-0 ceph-mon[75358]: pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:45 compute-0 python3.9[126949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:22:46 compute-0 sudo[127103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtwijjiavyhdietrqpvysesvhiowpuql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843766.3669865-34-105240695132216/AnsiballZ_setup.py'
Dec 04 10:22:46 compute-0 sudo[127103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:46 compute-0 python3.9[127105]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:22:47 compute-0 sudo[127103]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:47 compute-0 sudo[127187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxrazmylcniakhrnoaxseixmnjazkibv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843766.3669865-34-105240695132216/AnsiballZ_dnf.py'
Dec 04 10:22:47 compute-0 sudo[127187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:22:47 compute-0 python3.9[127189]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 04 10:22:47 compute-0 ceph-mon[75358]: pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:49 compute-0 sudo[127187]: pam_unix(sudo:session): session closed for user root
Dec 04 10:22:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:49 compute-0 python3.9[127340]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:22:49 compute-0 ceph-mon[75358]: pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:51 compute-0 python3.9[127491]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:22:51 compute-0 python3.9[127641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:22:52 compute-0 ceph-mon[75358]: pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:52 compute-0 python3.9[127793]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:22:52 compute-0 sshd-session[126799]: Connection closed by 192.168.122.30 port 55604
Dec 04 10:22:52 compute-0 sshd-session[126796]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:22:52 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 04 10:22:52 compute-0 systemd[1]: session-43.scope: Consumed 5.679s CPU time.
Dec 04 10:22:52 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Dec 04 10:22:52 compute-0 systemd-logind[798]: Removed session 43.
Dec 04 10:22:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:53 compute-0 sshd-session[127718]: Invalid user pzuser from 103.149.86.230 port 40726
Dec 04 10:22:53 compute-0 ceph-mon[75358]: pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:53 compute-0 sshd-session[127718]: Received disconnect from 103.149.86.230 port 40726:11: Bye Bye [preauth]
Dec 04 10:22:53 compute-0 sshd-session[127718]: Disconnected from invalid user pzuser 103.149.86.230 port 40726 [preauth]
Dec 04 10:22:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:22:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:55 compute-0 sshd-session[127818]: Invalid user gns3 from 103.179.218.243 port 41694
Dec 04 10:22:55 compute-0 sshd-session[127818]: Received disconnect from 103.179.218.243 port 41694:11: Bye Bye [preauth]
Dec 04 10:22:55 compute-0 sshd-session[127818]: Disconnected from invalid user gns3 103.179.218.243 port 41694 [preauth]
Dec 04 10:22:56 compute-0 ceph-mon[75358]: pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:57 compute-0 ceph-mon[75358]: pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:22:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:22:58 compute-0 sshd-session[127820]: Accepted publickey for zuul from 192.168.122.30 port 56488 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:22:58 compute-0 systemd-logind[798]: New session 44 of user zuul.
Dec 04 10:22:58 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 04 10:22:58 compute-0 sshd-session[127820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:22:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:22:59 compute-0 python3.9[127973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:23:00 compute-0 sudo[128127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suohffsicjpvwmacszrbfhtxueubssod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843780.0776284-50-242892282323973/AnsiballZ_file.py'
Dec 04 10:23:00 compute-0 sudo[128127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:00 compute-0 python3.9[128129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:01 compute-0 sudo[128127]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:01 compute-0 sudo[128279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chprotwctofvsjcdraoofkyrfmcippwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843781.1142652-50-137898616485835/AnsiballZ_file.py'
Dec 04 10:23:01 compute-0 sudo[128279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:01 compute-0 ceph-mon[75358]: pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:01 compute-0 python3.9[128281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:01 compute-0 sudo[128279]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:02 compute-0 sudo[128431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnwymeuwmayzxyvjhibmomhvzqgxlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843781.7983155-65-106733408814055/AnsiballZ_stat.py'
Dec 04 10:23:02 compute-0 sudo[128431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:03 compute-0 python3.9[128433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:03 compute-0 sudo[128431]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:03 compute-0 ceph-mon[75358]: pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:03 compute-0 sudo[128554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqafmxuahqdlpwxfuoofgkqzsxtwkyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843781.7983155-65-106733408814055/AnsiballZ_copy.py'
Dec 04 10:23:03 compute-0 sudo[128554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:03 compute-0 python3.9[128556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843781.7983155-65-106733408814055/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=81511a8c029290643b20bb87c9f35389df2dbe4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:03 compute-0 sudo[128554]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:04 compute-0 sudo[128706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azeolabvqpflojhonsorxntpxcbgkaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843783.8368237-65-13810163128928/AnsiballZ_stat.py'
Dec 04 10:23:04 compute-0 sudo[128706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:04 compute-0 python3.9[128708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:04 compute-0 sudo[128706]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:04 compute-0 sudo[128829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avjgfqsahxqdffnzatergtcfuyntfipf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843783.8368237-65-13810163128928/AnsiballZ_copy.py'
Dec 04 10:23:04 compute-0 sudo[128829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:04 compute-0 python3.9[128831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843783.8368237-65-13810163128928/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c3374d4610bc0ee65063b6de1905070784021c61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:04 compute-0 sudo[128829]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:05 compute-0 sudo[128981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oynqrssceodqltbzodwkoknllcpglyky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843784.9504771-65-138905972288115/AnsiballZ_stat.py'
Dec 04 10:23:05 compute-0 sudo[128981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:05 compute-0 python3.9[128983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:05 compute-0 sudo[128981]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:05 compute-0 sudo[129104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djbmqndhlrfeeppmgwcrzmiqzmxvnmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843784.9504771-65-138905972288115/AnsiballZ_copy.py'
Dec 04 10:23:05 compute-0 sudo[129104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:05 compute-0 ceph-mon[75358]: pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:06 compute-0 python3.9[129106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843784.9504771-65-138905972288115/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d43a22e1b31002b3767a06d3002da3fc47ddce6d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:06 compute-0 sudo[129104]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:06 compute-0 sudo[129256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saedeixmkzjogzojoqohtuacciasqyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843786.2556098-109-71648949318082/AnsiballZ_file.py'
Dec 04 10:23:06 compute-0 sudo[129256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:06 compute-0 python3.9[129258]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:06 compute-0 sudo[129256]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:07 compute-0 sudo[129408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcxokkbqbyrzjfpzblsgbssznceazokr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843786.8643377-109-222739629544882/AnsiballZ_file.py'
Dec 04 10:23:07 compute-0 sudo[129408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:08 compute-0 ceph-mon[75358]: pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:08 compute-0 sshd-session[129411]: Received disconnect from 74.249.218.27 port 48344:11: Bye Bye [preauth]
Dec 04 10:23:08 compute-0 sshd-session[129411]: Disconnected from authenticating user root 74.249.218.27 port 48344 [preauth]
Dec 04 10:23:08 compute-0 python3.9[129410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:08 compute-0 sudo[129408]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:08 compute-0 sudo[129562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljadfxkqryalktkrirlmgkmvlygbbohq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843788.4837463-124-195150463012650/AnsiballZ_stat.py'
Dec 04 10:23:08 compute-0 sudo[129562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:08 compute-0 python3.9[129564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:08 compute-0 sudo[129562]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:09 compute-0 ceph-mon[75358]: pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:09 compute-0 ceph-mon[75358]: pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:09 compute-0 sudo[129685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plsynivzjemoicodkqedcdzldbtzkehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843788.4837463-124-195150463012650/AnsiballZ_copy.py'
Dec 04 10:23:09 compute-0 sudo[129685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:09 compute-0 python3.9[129687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843788.4837463-124-195150463012650/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5034f68dea2e01161d2dd1c287333174d79beab6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:09 compute-0 sudo[129685]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:09 compute-0 sudo[129837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slebtolpfqjtsngapuhzvfcdhuvwhuhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843789.5711393-124-199144796876179/AnsiballZ_stat.py'
Dec 04 10:23:09 compute-0 sudo[129837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:10 compute-0 python3.9[129839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:10 compute-0 sudo[129837]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:10 compute-0 sudo[129960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjcmpzazuldpytiieldpubreqkuuvtjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843789.5711393-124-199144796876179/AnsiballZ_copy.py'
Dec 04 10:23:10 compute-0 sudo[129960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:10 compute-0 python3.9[129962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843789.5711393-124-199144796876179/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bc95c030718f7b888fdaa320eb4dd80dc2a36cf0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:10 compute-0 sudo[129960]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:10 compute-0 sudo[130112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwlliiqwnmxkvjerehwsrfpagqptkkfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843790.661143-124-261133524993298/AnsiballZ_stat.py'
Dec 04 10:23:10 compute-0 sudo[130112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:11 compute-0 python3.9[130114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:11 compute-0 sudo[130112]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:11 compute-0 ceph-mon[75358]: pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:11 compute-0 sudo[130235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljqyrzbnrtuomimthzoqcqbngakwkqlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843790.661143-124-261133524993298/AnsiballZ_copy.py'
Dec 04 10:23:11 compute-0 sudo[130235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:11 compute-0 python3.9[130237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843790.661143-124-261133524993298/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a6000cc946b5cfb01bb3913e9a943dfd39c04e6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:11 compute-0 sudo[130235]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:12 compute-0 sudo[130388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqiiyrqlcrkpispinkvargfqsftybsoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843791.9870188-168-204262951837303/AnsiballZ_file.py'
Dec 04 10:23:12 compute-0 sudo[130388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:12 compute-0 python3.9[130390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:12 compute-0 sudo[130388]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:12 compute-0 sudo[130541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjytubkthdhqujloihzwfevckbaprmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843792.6766558-168-113653791173043/AnsiballZ_file.py'
Dec 04 10:23:12 compute-0 sudo[130541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:13 compute-0 python3.9[130543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:13 compute-0 sudo[130541]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:13 compute-0 sudo[130694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwvlzoelvmkxushbeihjrfiohfmbzeua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843793.2571468-183-147177057385642/AnsiballZ_stat.py'
Dec 04 10:23:13 compute-0 sudo[130694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:13 compute-0 python3.9[130696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:13 compute-0 sudo[130694]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:13 compute-0 ceph-mon[75358]: pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:14 compute-0 sudo[130817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkaunrsjvdzpsbrrbmdragcdeibzddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843793.2571468-183-147177057385642/AnsiballZ_copy.py'
Dec 04 10:23:14 compute-0 sudo[130817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:14 compute-0 python3.9[130819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843793.2571468-183-147177057385642/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=865674be7d17eaa7bdfa20885df08114fc86c2da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:14 compute-0 sudo[130817]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:14 compute-0 sudo[130969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonhfdsaeqykzgkhzadlwavnmbetkrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843794.3754284-183-104863129007651/AnsiballZ_stat.py'
Dec 04 10:23:14 compute-0 sudo[130969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:14 compute-0 python3.9[130971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:14 compute-0 sudo[130969]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:15 compute-0 sudo[131092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmjwqrdqpscjrapvdthqguivizogpdvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843794.3754284-183-104863129007651/AnsiballZ_copy.py'
Dec 04 10:23:15 compute-0 sudo[131092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:15 compute-0 python3.9[131094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843794.3754284-183-104863129007651/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bc95c030718f7b888fdaa320eb4dd80dc2a36cf0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:15 compute-0 sudo[131092]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:15 compute-0 sudo[131244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvcgexekvmkmkpknkzpivxsvidgbxpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843795.644625-183-63321839445067/AnsiballZ_stat.py'
Dec 04 10:23:15 compute-0 sudo[131244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:16 compute-0 python3.9[131246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:16 compute-0 sudo[131244]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:16 compute-0 ceph-mon[75358]: pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:16 compute-0 sudo[131367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkfdtsqshlbukupgajkqglllpyhlljhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843795.644625-183-63321839445067/AnsiballZ_copy.py'
Dec 04 10:23:16 compute-0 sudo[131367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:16 compute-0 python3.9[131369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843795.644625-183-63321839445067/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d3da86dcc4ab46f92c4982f6a65d424bec9319aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:16 compute-0 sudo[131367]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:17 compute-0 ceph-mon[75358]: pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:17 compute-0 sudo[131519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyhtlaboctditwapceupgfqtbrueiif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843797.3137522-243-146716525916063/AnsiballZ_file.py'
Dec 04 10:23:17 compute-0 sudo[131519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:17 compute-0 python3.9[131521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:17 compute-0 sudo[131519]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:18 compute-0 sshd-session[130544]: Connection closed by 101.47.163.20 port 45468 [preauth]
Dec 04 10:23:18 compute-0 sudo[131672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwinqffivujtzexrnktgurhetuchbuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843797.9484406-251-131388131631049/AnsiballZ_stat.py'
Dec 04 10:23:18 compute-0 sudo[131672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:18 compute-0 python3.9[131674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:18 compute-0 sudo[131672]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:18 compute-0 sudo[131795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adoaoganirjbzzrwyommflxiserxirvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843797.9484406-251-131388131631049/AnsiballZ_copy.py'
Dec 04 10:23:18 compute-0 sudo[131795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:18 compute-0 python3.9[131797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843797.9484406-251-131388131631049/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:18 compute-0 sudo[131795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:19 compute-0 sudo[131947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eerwhdxrbuwaezrqtsaxobujoryrymuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843799.1014993-267-88968300135142/AnsiballZ_file.py'
Dec 04 10:23:19 compute-0 sudo[131947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:19 compute-0 python3.9[131949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:19 compute-0 sudo[131947]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:19 compute-0 ceph-mon[75358]: pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:19 compute-0 sudo[132099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpwwbjwxtssjvgzofjcbeptkkevcniwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843799.7225451-275-178684544813999/AnsiballZ_stat.py'
Dec 04 10:23:19 compute-0 sudo[132099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:20 compute-0 python3.9[132101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:20 compute-0 sudo[132099]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:20 compute-0 sudo[132222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdrwycuaibxcmyzplwbrtaydeuqcmrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843799.7225451-275-178684544813999/AnsiballZ_copy.py'
Dec 04 10:23:20 compute-0 sudo[132222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:20 compute-0 python3.9[132224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843799.7225451-275-178684544813999/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:20 compute-0 sudo[132222]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:21 compute-0 sudo[132374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogpgrnkslaypchcaimxugjjljrukicvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843800.8676946-291-158474840952905/AnsiballZ_file.py'
Dec 04 10:23:21 compute-0 sudo[132374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:21 compute-0 python3.9[132376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:21 compute-0 sudo[132374]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:21 compute-0 sudo[132526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqedjcdfxolmqemgczekcocdgrdqdma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843801.4869964-299-194653593042473/AnsiballZ_stat.py'
Dec 04 10:23:21 compute-0 sudo[132526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:21 compute-0 ceph-mon[75358]: pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:22 compute-0 python3.9[132528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:22 compute-0 sudo[132526]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:22 compute-0 sudo[132649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptgjccabxspmhgbghnqixzhdeaeqjyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843801.4869964-299-194653593042473/AnsiballZ_copy.py'
Dec 04 10:23:22 compute-0 sudo[132649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:22 compute-0 python3.9[132651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843801.4869964-299-194653593042473/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:22 compute-0 sudo[132649]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:23 compute-0 sudo[132801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqbbrzsodecylrrkxovqpshngluvrdyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843802.7953882-315-98355122430677/AnsiballZ_file.py'
Dec 04 10:23:23 compute-0 sudo[132801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:23 compute-0 python3.9[132803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:23 compute-0 sudo[132801]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:23 compute-0 sudo[132953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enmwtiohbmlpzjkoebxhbfzftmyavwsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843803.4043937-323-89993190549934/AnsiballZ_stat.py'
Dec 04 10:23:23 compute-0 sudo[132953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:23 compute-0 python3.9[132955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:23 compute-0 sudo[132953]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:23 compute-0 ceph-mon[75358]: pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:24 compute-0 sudo[133076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdivbptubwxtwgiatjvauqikplolgasp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843803.4043937-323-89993190549934/AnsiballZ_copy.py'
Dec 04 10:23:24 compute-0 sudo[133076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:24 compute-0 python3.9[133078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843803.4043937-323-89993190549934/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:24 compute-0 sudo[133076]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:24 compute-0 sudo[133228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khxltkaijvdnmmllndidzqvsdhjpltaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843804.5698738-339-2123348841297/AnsiballZ_file.py'
Dec 04 10:23:24 compute-0 sudo[133228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:25 compute-0 python3.9[133230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:25 compute-0 sudo[133228]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:25 compute-0 sudo[133380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojjfpsydiukhqptdvwglpahxjudvouh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843805.210789-347-96209227728674/AnsiballZ_stat.py'
Dec 04 10:23:25 compute-0 sudo[133380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:25 compute-0 python3.9[133382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:25 compute-0 sudo[133380]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:26 compute-0 sudo[133503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjjsaawaaofynxfptpypbchkfswtwosp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843805.210789-347-96209227728674/AnsiballZ_copy.py'
Dec 04 10:23:26 compute-0 sudo[133503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:26 compute-0 ceph-mon[75358]: pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:26 compute-0 python3.9[133505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843805.210789-347-96209227728674/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:26 compute-0 sudo[133503]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.301177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806301425, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1569, "num_deletes": 251, "total_data_size": 2217000, "memory_usage": 2256840, "flush_reason": "Manual Compaction"}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806310752, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1304364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7390, "largest_seqno": 8958, "table_properties": {"data_size": 1299180, "index_size": 2260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15196, "raw_average_key_size": 20, "raw_value_size": 1286907, "raw_average_value_size": 1750, "num_data_blocks": 106, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843661, "oldest_key_time": 1764843661, "file_creation_time": 1764843806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9643 microseconds, and 4715 cpu microseconds.
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.310827) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1304364 bytes OK
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.310864) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313526) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313550) EVENT_LOG_v1 {"time_micros": 1764843806313541, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2209966, prev total WAL file size 2209966, number of live WAL files 2.
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.314704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1273KB)], [20(7320KB)]
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806314771, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8800306, "oldest_snapshot_seqno": -1}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3397 keys, 6883511 bytes, temperature: kUnknown
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806370332, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6883511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6857900, "index_size": 16030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81480, "raw_average_key_size": 23, "raw_value_size": 6793596, "raw_average_value_size": 1999, "num_data_blocks": 710, "num_entries": 3397, "num_filter_entries": 3397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.370586) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6883511 bytes
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.371971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.3) OK, records in: 3845, records dropped: 448 output_compression: NoCompression
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.371991) EVENT_LOG_v1 {"time_micros": 1764843806371981, "job": 6, "event": "compaction_finished", "compaction_time_micros": 55637, "compaction_time_cpu_micros": 21632, "output_level": 6, "num_output_files": 1, "total_output_size": 6883511, "num_input_records": 3845, "num_output_records": 3397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806372376, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806373705, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.314610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:23:26
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'images', '.rgw.root']
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:23:26 compute-0 sudo[133655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owoeoilcijvdkdyfntfbykgbnjtuconc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843806.4628544-363-64936527335483/AnsiballZ_file.py'
Dec 04 10:23:26 compute-0 sudo[133655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:26 compute-0 python3.9[133657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:26 compute-0 sudo[133655]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:27 compute-0 ceph-mon[75358]: pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:27 compute-0 sudo[133807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxsqqarzkcnibxtbhjbvsiqwyepnvwnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843807.0952694-371-277953686617442/AnsiballZ_stat.py'
Dec 04 10:23:27 compute-0 sudo[133807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:27 compute-0 python3.9[133809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:27 compute-0 sudo[133807]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:27 compute-0 sudo[133881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:23:27 compute-0 sudo[133881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:27 compute-0 sudo[133881]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:27 compute-0 sudo[133933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:23:27 compute-0 sudo[133933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:27 compute-0 sudo[133978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycsxzqspwqfrbqhotghwgvwtlumcmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843807.0952694-371-277953686617442/AnsiballZ_copy.py'
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:23:27 compute-0 sudo[133978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:23:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:23:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:23:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:23:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:23:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:23:28 compute-0 python3.9[133982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843807.0952694-371-277953686617442/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:28 compute-0 sudo[133978]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:28 compute-0 sudo[133933]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:23:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:23:28 compute-0 sshd-session[127823]: Connection closed by 192.168.122.30 port 56488
Dec 04 10:23:28 compute-0 sshd-session[127820]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:23:28 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Dec 04 10:23:28 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 04 10:23:28 compute-0 systemd[1]: session-44.scope: Consumed 22.843s CPU time.
Dec 04 10:23:28 compute-0 systemd-logind[798]: Removed session 44.
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:23:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:23:28 compute-0 sudo[134037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:23:28 compute-0 sudo[134037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:28 compute-0 sudo[134037]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:28 compute-0 sudo[134062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:23:28 compute-0 sudo[134062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:28 compute-0 podman[134100]: 2025-12-04 10:23:28.897973475 +0000 UTC m=+0.026424195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:29 compute-0 podman[134100]: 2025-12-04 10:23:29.101979596 +0000 UTC m=+0.230430336 container create f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:23:29 compute-0 systemd[1]: Started libpod-conmon-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope.
Dec 04 10:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:29 compute-0 podman[134100]: 2025-12-04 10:23:29.540683291 +0000 UTC m=+0.669134021 container init f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:23:29 compute-0 podman[134100]: 2025-12-04 10:23:29.547976426 +0000 UTC m=+0.676427126 container start f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:23:29 compute-0 cool_lichterman[134119]: 167 167
Dec 04 10:23:29 compute-0 systemd[1]: libpod-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope: Deactivated successfully.
Dec 04 10:23:29 compute-0 podman[134100]: 2025-12-04 10:23:29.62276625 +0000 UTC m=+0.751216960 container attach f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:23:29 compute-0 podman[134100]: 2025-12-04 10:23:29.623498397 +0000 UTC m=+0.751949117 container died f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 04 10:23:29 compute-0 ceph-mon[75358]: pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:29 compute-0 sshd-session[134114]: Invalid user master from 217.154.62.22 port 44930
Dec 04 10:23:29 compute-0 sshd-session[134114]: Received disconnect from 217.154.62.22 port 44930:11: Bye Bye [preauth]
Dec 04 10:23:29 compute-0 sshd-session[134114]: Disconnected from invalid user master 217.154.62.22 port 44930 [preauth]
Dec 04 10:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b7f7e3bd34fedf2d95aff95ee6a4f7517cd568a00ae2605a750644096ba854-merged.mount: Deactivated successfully.
Dec 04 10:23:30 compute-0 podman[134100]: 2025-12-04 10:23:30.079313093 +0000 UTC m=+1.207763783 container remove f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:23:30 compute-0 systemd[1]: libpod-conmon-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope: Deactivated successfully.
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.266503699 +0000 UTC m=+0.060167823 container create 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:23:30 compute-0 systemd[1]: Started libpod-conmon-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope.
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.241563142 +0000 UTC m=+0.035227316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.366507347 +0000 UTC m=+0.160171521 container init 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.376736332 +0000 UTC m=+0.170400456 container start 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.380601025 +0000 UTC m=+0.174265179 container attach 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:23:30 compute-0 dazzling_chandrasekhar[134160]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:23:30 compute-0 dazzling_chandrasekhar[134160]: --> All data devices are unavailable
Dec 04 10:23:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:30 compute-0 systemd[1]: libpod-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope: Deactivated successfully.
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.895707973 +0000 UTC m=+0.689372117 container died 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e-merged.mount: Deactivated successfully.
Dec 04 10:23:30 compute-0 podman[134144]: 2025-12-04 10:23:30.948692213 +0000 UTC m=+0.742356337 container remove 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:23:30 compute-0 systemd[1]: libpod-conmon-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope: Deactivated successfully.
Dec 04 10:23:30 compute-0 sudo[134062]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:31 compute-0 sudo[134191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:23:31 compute-0 sudo[134191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:31 compute-0 sudo[134191]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:31 compute-0 sudo[134216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:23:31 compute-0 sudo[134216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.369234323 +0000 UTC m=+0.022016269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.683278911 +0000 UTC m=+0.336060837 container create 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:23:31 compute-0 systemd[1]: Started libpod-conmon-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope.
Dec 04 10:23:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.771464605 +0000 UTC m=+0.424246541 container init 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.777306635 +0000 UTC m=+0.430088561 container start 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.781491775 +0000 UTC m=+0.434273701 container attach 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:23:31 compute-0 ecstatic_jones[134269]: 167 167
Dec 04 10:23:31 compute-0 systemd[1]: libpod-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope: Deactivated successfully.
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.78293271 +0000 UTC m=+0.435714646 container died 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fd8cd0b30f7fe793d36cebe86756277096abdbd5b63acbb4445ee76789f1c27-merged.mount: Deactivated successfully.
Dec 04 10:23:31 compute-0 podman[134253]: 2025-12-04 10:23:31.821519755 +0000 UTC m=+0.474301681 container remove 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:23:31 compute-0 systemd[1]: libpod-conmon-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope: Deactivated successfully.
Dec 04 10:23:31 compute-0 podman[134293]: 2025-12-04 10:23:31.975733522 +0000 UTC m=+0.041161178 container create 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:23:32 compute-0 systemd[1]: Started libpod-conmon-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope.
Dec 04 10:23:32 compute-0 ceph-mon[75358]: pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:31.956145492 +0000 UTC m=+0.021573198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:32.063764622 +0000 UTC m=+0.129192288 container init 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:32.070472113 +0000 UTC m=+0.135899769 container start 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:32.073880094 +0000 UTC m=+0.139307750 container attach 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]: {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     "0": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "devices": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "/dev/loop3"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             ],
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_name": "ceph_lv0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_size": "21470642176",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "name": "ceph_lv0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "tags": {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.crush_device_class": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.encrypted": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_id": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.vdo": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.with_tpm": "0"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             },
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "vg_name": "ceph_vg0"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         }
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     ],
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     "1": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "devices": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "/dev/loop4"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             ],
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_name": "ceph_lv1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_size": "21470642176",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "name": "ceph_lv1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "tags": {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.crush_device_class": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.encrypted": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_id": "1",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.vdo": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.with_tpm": "0"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             },
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "vg_name": "ceph_vg1"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         }
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     ],
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     "2": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "devices": [
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "/dev/loop5"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             ],
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_name": "ceph_lv2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_size": "21470642176",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "name": "ceph_lv2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "tags": {
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.crush_device_class": "",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.encrypted": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osd_id": "2",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.vdo": "0",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:                 "ceph.with_tpm": "0"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             },
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "type": "block",
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:             "vg_name": "ceph_vg2"
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:         }
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]:     ]
Dec 04 10:23:32 compute-0 jovial_elgamal[134309]: }
Dec 04 10:23:32 compute-0 systemd[1]: libpod-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope: Deactivated successfully.
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:32.373089867 +0000 UTC m=+0.438517523 container died 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689-merged.mount: Deactivated successfully.
Dec 04 10:23:32 compute-0 podman[134293]: 2025-12-04 10:23:32.435222046 +0000 UTC m=+0.500649702 container remove 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:23:32 compute-0 systemd[1]: libpod-conmon-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope: Deactivated successfully.
Dec 04 10:23:32 compute-0 sudo[134216]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:32 compute-0 sudo[134331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:23:32 compute-0 sudo[134331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:32 compute-0 sudo[134331]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:32 compute-0 sudo[134356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:23:32 compute-0 sudo[134356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.895456139 +0000 UTC m=+0.038406052 container create 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 04 10:23:32 compute-0 systemd[1]: Started libpod-conmon-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope.
Dec 04 10:23:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.964919644 +0000 UTC m=+0.107869577 container init 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.969955044 +0000 UTC m=+0.112904957 container start 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.972779722 +0000 UTC m=+0.115729635 container attach 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:23:32 compute-0 blissful_mccarthy[134410]: 167 167
Dec 04 10:23:32 compute-0 systemd[1]: libpod-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope: Deactivated successfully.
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.878670006 +0000 UTC m=+0.021619919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:32 compute-0 podman[134394]: 2025-12-04 10:23:32.974480332 +0000 UTC m=+0.117430245 container died 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 04 10:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0397515e6a037f9a11cb0dd02e85b6cc2b575b6076bfdf5f3e46e4cfbc3a6ba-merged.mount: Deactivated successfully.
Dec 04 10:23:33 compute-0 podman[134394]: 2025-12-04 10:23:33.012016472 +0000 UTC m=+0.154966385 container remove 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:23:33 compute-0 systemd[1]: libpod-conmon-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope: Deactivated successfully.
Dec 04 10:23:33 compute-0 podman[134434]: 2025-12-04 10:23:33.160851741 +0000 UTC m=+0.044772604 container create 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:23:33 compute-0 systemd[1]: Started libpod-conmon-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope.
Dec 04 10:23:33 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:23:33 compute-0 podman[134434]: 2025-12-04 10:23:33.140996254 +0000 UTC m=+0.024917137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:23:33 compute-0 podman[134434]: 2025-12-04 10:23:33.237675402 +0000 UTC m=+0.121596425 container init 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 04 10:23:33 compute-0 podman[134434]: 2025-12-04 10:23:33.245294375 +0000 UTC m=+0.129215228 container start 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:23:33 compute-0 podman[134434]: 2025-12-04 10:23:33.249236069 +0000 UTC m=+0.133157052 container attach 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:23:33 compute-0 sshd-session[134456]: Accepted publickey for zuul from 192.168.122.30 port 47556 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:23:33 compute-0 systemd-logind[798]: New session 45 of user zuul.
Dec 04 10:23:33 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 04 10:23:33 compute-0 sshd-session[134456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:23:33 compute-0 lvm[134631]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:23:33 compute-0 lvm[134631]: VG ceph_vg0 finished
Dec 04 10:23:33 compute-0 lvm[134637]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:23:33 compute-0 lvm[134637]: VG ceph_vg1 finished
Dec 04 10:23:34 compute-0 lvm[134658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:23:34 compute-0 lvm[134658]: VG ceph_vg2 finished
Dec 04 10:23:34 compute-0 lvm[134668]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:23:34 compute-0 lvm[134668]: VG ceph_vg1 finished
Dec 04 10:23:34 compute-0 ceph-mon[75358]: pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:34 compute-0 sudo[134688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yksfyitpldlgyewlkfvmywqicmodwyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843813.5872147-22-149469099258720/AnsiballZ_file.py'
Dec 04 10:23:34 compute-0 sudo[134688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:34 compute-0 lvm[134691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:23:34 compute-0 lvm[134691]: VG ceph_vg1 finished
Dec 04 10:23:34 compute-0 practical_hertz[134451]: {}
Dec 04 10:23:34 compute-0 podman[134434]: 2025-12-04 10:23:34.139556651 +0000 UTC m=+1.023477504 container died 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:23:34 compute-0 systemd[1]: libpod-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Deactivated successfully.
Dec 04 10:23:34 compute-0 systemd[1]: libpod-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Consumed 1.426s CPU time.
Dec 04 10:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f-merged.mount: Deactivated successfully.
Dec 04 10:23:34 compute-0 podman[134434]: 2025-12-04 10:23:34.186432275 +0000 UTC m=+1.070353128 container remove 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:23:34 compute-0 systemd[1]: libpod-conmon-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Deactivated successfully.
Dec 04 10:23:34 compute-0 sudo[134356]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:23:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:23:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:34 compute-0 python3.9[134692]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:34 compute-0 sudo[134688]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:34 compute-0 sudo[134707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:23:34 compute-0 sudo[134707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:23:34 compute-0 sudo[134707]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:34 compute-0 sudo[134881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omsiikwlsqkihpvdilmnoptypbblrfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843814.4129927-34-197785888702082/AnsiballZ_stat.py'
Dec 04 10:23:34 compute-0 sudo[134881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:35 compute-0 python3.9[134883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:35 compute-0 sudo[134881]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:23:35 compute-0 ceph-mon[75358]: pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:35 compute-0 sudo[135004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdgbzslxkgopkleanzhxaygyyzwuctra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843814.4129927-34-197785888702082/AnsiballZ_copy.py'
Dec 04 10:23:35 compute-0 sudo[135004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:35 compute-0 python3.9[135006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843814.4129927-34-197785888702082/.source.conf _original_basename=ceph.conf follow=False checksum=743a744c283201ba2a628c2473976918c65bd541 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:35 compute-0 sudo[135004]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:36 compute-0 sudo[135156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntzktmkeligqkdbgmtwbytqcdqdfpnyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843815.8882682-34-280672451107229/AnsiballZ_stat.py'
Dec 04 10:23:36 compute-0 sudo[135156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:36 compute-0 python3.9[135158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:36 compute-0 sudo[135156]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:36 compute-0 sudo[135279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avtgbwzjgvcrvydesmwbofombwrhazad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843815.8882682-34-280672451107229/AnsiballZ_copy.py'
Dec 04 10:23:36 compute-0 sudo[135279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:36 compute-0 python3.9[135281]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843815.8882682-34-280672451107229/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=78fa63d8c69ed08876e15c6d423f4ac4e13914fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:36 compute-0 sudo[135279]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:36 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:23:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:23:37 compute-0 sshd-session[134469]: Connection closed by 192.168.122.30 port 47556
Dec 04 10:23:37 compute-0 sshd-session[134456]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:23:37 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 04 10:23:37 compute-0 systemd[1]: session-45.scope: Consumed 2.641s CPU time.
Dec 04 10:23:37 compute-0 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Dec 04 10:23:37 compute-0 systemd-logind[798]: Removed session 45.
Dec 04 10:23:37 compute-0 ceph-mon[75358]: pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:40 compute-0 ceph-mon[75358]: pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:42 compute-0 ceph-mon[75358]: pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:42 compute-0 sshd-session[135306]: Invalid user teste from 107.175.213.239 port 60572
Dec 04 10:23:42 compute-0 sshd-session[135306]: Received disconnect from 107.175.213.239 port 60572:11: Bye Bye [preauth]
Dec 04 10:23:42 compute-0 sshd-session[135306]: Disconnected from invalid user teste 107.175.213.239 port 60572 [preauth]
Dec 04 10:23:42 compute-0 sshd-session[135308]: Accepted publickey for zuul from 192.168.122.30 port 56372 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:23:42 compute-0 systemd-logind[798]: New session 46 of user zuul.
Dec 04 10:23:42 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 04 10:23:42 compute-0 sshd-session[135308]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:23:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:43 compute-0 ceph-mon[75358]: pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:43 compute-0 python3.9[135461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:23:44 compute-0 sudo[135615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnxunbhhicnrrgrveobcqawurhdnfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843824.1355655-34-42946210443243/AnsiballZ_file.py'
Dec 04 10:23:44 compute-0 sudo[135615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:44 compute-0 python3.9[135617]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:44 compute-0 sudo[135615]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:45 compute-0 sudo[135767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erqzgteotwlzikcllkfezfeptucdbkso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843824.8652954-34-238567000131369/AnsiballZ_file.py'
Dec 04 10:23:45 compute-0 sudo[135767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:45 compute-0 python3.9[135769]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:23:45 compute-0 sudo[135767]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:45 compute-0 ceph-mon[75358]: pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:46 compute-0 python3.9[135919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:23:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:46 compute-0 sudo[136069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypasiouirzpdfaripkcjwvpfuzvmvehg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843826.442456-57-224375809145241/AnsiballZ_seboolean.py'
Dec 04 10:23:46 compute-0 sudo[136069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:47 compute-0 python3.9[136071]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 04 10:23:48 compute-0 ceph-mon[75358]: pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:48 compute-0 sudo[136069]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:48 compute-0 sudo[136226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aagszjiziovllaibnvppbwlokzsqqhot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843828.5962071-67-126082759380332/AnsiballZ_setup.py'
Dec 04 10:23:48 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 04 10:23:48 compute-0 sudo[136226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:49 compute-0 python3.9[136228]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:23:49 compute-0 sudo[136226]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:49 compute-0 sudo[136310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yizpmpoelzrvcjxvbnqjkvaovtawgcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843828.5962071-67-126082759380332/AnsiballZ_dnf.py'
Dec 04 10:23:49 compute-0 sudo[136310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:50 compute-0 ceph-mon[75358]: pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:50 compute-0 python3.9[136312]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:23:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:51 compute-0 sudo[136310]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:52 compute-0 ceph-mon[75358]: pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:52 compute-0 sudo[136463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oirqwqnhiqlmtkkozksfndmowrrofnqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843831.7745783-79-55270228106439/AnsiballZ_systemd.py'
Dec 04 10:23:52 compute-0 sudo[136463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:52 compute-0 python3.9[136465]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:23:52 compute-0 sudo[136463]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:53 compute-0 sudo[136618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvcsymohftgwiocwkwfmxzwjkrnewoh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843832.9154568-87-189228415148660/AnsiballZ_edpm_nftables_snippet.py'
Dec 04 10:23:53 compute-0 sudo[136618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:53 compute-0 python3[136620]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 04 10:23:53 compute-0 sudo[136618]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:54 compute-0 ceph-mon[75358]: pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:54 compute-0 sudo[136770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirgnuzyxfgnidzchlplprvmkfcwizan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843833.8409476-96-95489980960699/AnsiballZ_file.py'
Dec 04 10:23:54 compute-0 sudo[136770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:54 compute-0 python3.9[136772]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:54 compute-0 sudo[136770]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:54 compute-0 sudo[136922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjnlystfrrodvedgbdnwmphrtbivmdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843834.4402437-104-152745341870731/AnsiballZ_stat.py'
Dec 04 10:23:54 compute-0 sudo[136922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:55 compute-0 python3.9[136924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:55 compute-0 sudo[136922]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:55 compute-0 sudo[137000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyusvuhpmcusuckodxrrccelocvnvfsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843834.4402437-104-152745341870731/AnsiballZ_file.py'
Dec 04 10:23:55 compute-0 sudo[137000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:55 compute-0 python3.9[137002]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:55 compute-0 sudo[137000]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:55 compute-0 sudo[137152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvthpraohoosbmjvvatbknssbrfqwve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843835.679165-116-77518355491225/AnsiballZ_stat.py'
Dec 04 10:23:55 compute-0 sudo[137152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:56 compute-0 ceph-mon[75358]: pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:56 compute-0 python3.9[137154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:56 compute-0 sudo[137152]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:23:56 compute-0 sudo[137230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhqdfoyejhmjjfxzidfkfbxlacldqrgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843835.679165-116-77518355491225/AnsiballZ_file.py'
Dec 04 10:23:56 compute-0 sudo[137230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:56 compute-0 python3.9[137232]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uzajw5vp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:56 compute-0 sudo[137230]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:56 compute-0 sudo[137382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rarycnpdgdsrbaixufhfcdnapqxojcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843836.7485502-128-257888009712208/AnsiballZ_stat.py'
Dec 04 10:23:57 compute-0 sudo[137382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:57 compute-0 python3.9[137384]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:57 compute-0 sudo[137382]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:57 compute-0 sudo[137460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajtmglrbslfzoooqhmewntokvytqywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843836.7485502-128-257888009712208/AnsiballZ_file.py'
Dec 04 10:23:57 compute-0 sudo[137460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:57 compute-0 python3.9[137462]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:23:57 compute-0 sudo[137460]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:23:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:23:58 compute-0 ceph-mon[75358]: pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:58 compute-0 sudo[137613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wholyojsjamuvyjxpemxdoogjsdmhadt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843837.8231266-141-131093626277414/AnsiballZ_command.py'
Dec 04 10:23:58 compute-0 sudo[137613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:58 compute-0 python3.9[137615]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:23:58 compute-0 sudo[137613]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:23:59 compute-0 sudo[137767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulaxolxtmypttieuweipzqynvgtnrblo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843838.6101635-149-255454155822884/AnsiballZ_edpm_nftables_from_files.py'
Dec 04 10:23:59 compute-0 sudo[137767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:59 compute-0 python3[137769]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 04 10:23:59 compute-0 sudo[137767]: pam_unix(sudo:session): session closed for user root
Dec 04 10:23:59 compute-0 sudo[137919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesbwlzurgmxsxvurgsukiswfohwiioh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843839.3780198-157-202304835720289/AnsiballZ_stat.py'
Dec 04 10:23:59 compute-0 sudo[137919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:23:59 compute-0 python3.9[137921]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:23:59 compute-0 sudo[137919]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:00 compute-0 ceph-mon[75358]: pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:00 compute-0 sudo[138044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfyfvyduuvthkevhysfzmmeljabwhjpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843839.3780198-157-202304835720289/AnsiballZ_copy.py'
Dec 04 10:24:00 compute-0 sudo[138044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:00 compute-0 sshd-session[137463]: Invalid user test from 49.124.151.62 port 46920
Dec 04 10:24:00 compute-0 python3.9[138046]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843839.3780198-157-202304835720289/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:00 compute-0 sudo[138044]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:01 compute-0 sudo[138196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgazioygvqnlphorixmhqpavmmrigno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843840.738866-172-209200007089401/AnsiballZ_stat.py'
Dec 04 10:24:01 compute-0 sudo[138196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:01 compute-0 sshd-session[137463]: Connection closed by invalid user test 49.124.151.62 port 46920 [preauth]
Dec 04 10:24:01 compute-0 python3.9[138198]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:01 compute-0 sudo[138196]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:01 compute-0 sudo[138321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kteigwmrpspxqkobccpsrqvllcnrmezl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843840.738866-172-209200007089401/AnsiballZ_copy.py'
Dec 04 10:24:01 compute-0 sudo[138321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:01 compute-0 python3.9[138323]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843840.738866-172-209200007089401/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:01 compute-0 sudo[138321]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:02 compute-0 ceph-mon[75358]: pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:02 compute-0 sudo[138473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brhngovshqsgvdxlavlctrametquttlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843841.9995663-187-220923969205040/AnsiballZ_stat.py'
Dec 04 10:24:02 compute-0 sudo[138473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:02 compute-0 python3.9[138475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:02 compute-0 sudo[138473]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:02 compute-0 sudo[138598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-litvljzevlrgerpabhbhpjqpfgctzstj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843841.9995663-187-220923969205040/AnsiballZ_copy.py'
Dec 04 10:24:02 compute-0 sudo[138598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:03 compute-0 python3.9[138600]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843841.9995663-187-220923969205040/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:03 compute-0 sudo[138598]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:03 compute-0 sudo[138750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpsutrxqqhagdpjlaakkzucjixvgars ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843843.208006-202-128697812620889/AnsiballZ_stat.py'
Dec 04 10:24:03 compute-0 sudo[138750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:03 compute-0 python3.9[138752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:03 compute-0 sudo[138750]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:04 compute-0 sudo[138875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxrdekygogyicgwxckpfbzwwnajtbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843843.208006-202-128697812620889/AnsiballZ_copy.py'
Dec 04 10:24:04 compute-0 sudo[138875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:04 compute-0 ceph-mon[75358]: pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:04 compute-0 python3.9[138877]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843843.208006-202-128697812620889/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:04 compute-0 sudo[138875]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:04 compute-0 sudo[139028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxsieyhluzpephmvabsgpwztrhljdyrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843844.4228766-217-209509027150972/AnsiballZ_stat.py'
Dec 04 10:24:04 compute-0 sudo[139028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:04 compute-0 python3.9[139030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:04 compute-0 sudo[139028]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:05 compute-0 sudo[139153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxcugyrpaapuctzblvegzhlpgkujzmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843844.4228766-217-209509027150972/AnsiballZ_copy.py'
Dec 04 10:24:05 compute-0 sudo[139153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:05 compute-0 python3.9[139155]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843844.4228766-217-209509027150972/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:05 compute-0 sudo[139153]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:05 compute-0 sudo[139305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jblvwvprhjmbqqxbmqzqcrjpewzcmmla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843845.65071-232-22970957220856/AnsiballZ_file.py'
Dec 04 10:24:05 compute-0 sudo[139305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:06 compute-0 python3.9[139307]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:06 compute-0 ceph-mon[75358]: pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:06 compute-0 sudo[139305]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:24:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2048 writes, 9132 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2048 writes, 9132 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 11.64 MB, 0.02 MB/s
                                           Interval WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     92.1      0.09              0.02         3    0.031       0      0       0.0       0.0
                                             L6      1/0    6.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     88.8     78.1      0.18              0.04         2    0.088    7244    737       0.0       0.0
                                            Sum      1/0    6.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     58.4     82.9      0.27              0.06         5    0.053    7244    737       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     59.2     84.0      0.26              0.06         4    0.066    7244    737       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     88.8     78.1      0.18              0.04         2    0.088    7244    737       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.6      0.09              0.02         2    0.044       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 308.00 MB usage: 709.38 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(39,621.84 KB,0.197165%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,58.92 KB,0.0186821%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:24:06 compute-0 sudo[139457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glqfcbliwpaoazuhthfjbulmaudjcgoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843846.2551496-240-144579098693274/AnsiballZ_command.py'
Dec 04 10:24:06 compute-0 sudo[139457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:06 compute-0 python3.9[139459]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:06 compute-0 sudo[139457]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:07 compute-0 sudo[139612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdtubvxtboklrfmzadyepmjmfdoioty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843846.910952-248-225726130873438/AnsiballZ_blockinfile.py'
Dec 04 10:24:07 compute-0 sudo[139612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:07 compute-0 python3.9[139614]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:07 compute-0 sudo[139612]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:08 compute-0 sudo[139764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcyakjfydhfbsmemtcmnmnrlekyfkmlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843847.8123455-257-102296846164468/AnsiballZ_command.py'
Dec 04 10:24:08 compute-0 sudo[139764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:08 compute-0 ceph-mon[75358]: pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:08 compute-0 python3.9[139766]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:08 compute-0 sudo[139764]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:08 compute-0 sudo[139919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrvmikkyyqnqfcyxswvoulazvdbhjaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843848.5769086-265-141206178977636/AnsiballZ_stat.py'
Dec 04 10:24:08 compute-0 sudo[139919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:09 compute-0 python3.9[139921]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:24:09 compute-0 sudo[139919]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:09 compute-0 ceph-mon[75358]: pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:09 compute-0 sudo[140073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snidtntgyoafoptjoacgxankmbshckzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843849.3032265-273-247522550399278/AnsiballZ_command.py'
Dec 04 10:24:09 compute-0 sudo[140073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:09 compute-0 sshd-session[139768]: Invalid user free from 103.149.86.230 port 60776
Dec 04 10:24:09 compute-0 python3.9[140075]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:09 compute-0 sudo[140073]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:09 compute-0 sshd-session[139768]: Received disconnect from 103.149.86.230 port 60776:11: Bye Bye [preauth]
Dec 04 10:24:09 compute-0 sshd-session[139768]: Disconnected from invalid user free 103.149.86.230 port 60776 [preauth]
Dec 04 10:24:10 compute-0 sudo[140228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcgnjyhaxxofwkstgxbhymtmebzxwsyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843850.014417-281-235891868830284/AnsiballZ_file.py'
Dec 04 10:24:10 compute-0 sudo[140228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:10 compute-0 python3.9[140230]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:10 compute-0 sudo[140228]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:11 compute-0 python3.9[140380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:24:12 compute-0 ceph-mon[75358]: pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:12 compute-0 sudo[140531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxiziinuowqrkdqcuyjzoitignohmspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843852.3413985-321-20282796610776/AnsiballZ_command.py'
Dec 04 10:24:12 compute-0 sudo[140531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:12 compute-0 python3.9[140533]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:12 compute-0 ovs-vsctl[140534]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 04 10:24:12 compute-0 sudo[140531]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:13 compute-0 sudo[140684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwlnrzgxqgudmdkyrdteqdpxufiondc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843853.0326157-330-117272070028216/AnsiballZ_command.py'
Dec 04 10:24:13 compute-0 sudo[140684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:13 compute-0 ceph-mon[75358]: pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:13 compute-0 python3.9[140686]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:13 compute-0 sudo[140684]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:13 compute-0 sudo[140839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvyvgvdxqrcivjsdsdqnqelhpywrbvay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843853.6886315-338-98692347494246/AnsiballZ_command.py'
Dec 04 10:24:13 compute-0 sudo[140839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:14 compute-0 python3.9[140841]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:14 compute-0 ovs-vsctl[140842]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 04 10:24:14 compute-0 sudo[140839]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:14 compute-0 python3.9[140992]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:24:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:15 compute-0 sudo[141144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behkavjxldstzaqtgyzibqiqhemkucdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843854.9512348-355-145161069735658/AnsiballZ_file.py'
Dec 04 10:24:15 compute-0 sudo[141144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:15 compute-0 python3.9[141146]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:15 compute-0 sudo[141144]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:15 compute-0 sudo[141296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmjfwphpdeivrteqrkyedfkhiohebebc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843855.5845006-363-124039184207349/AnsiballZ_stat.py'
Dec 04 10:24:15 compute-0 sudo[141296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:15 compute-0 ceph-mon[75358]: pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:16 compute-0 python3.9[141298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:16 compute-0 sudo[141296]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:16 compute-0 sudo[141374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxtvbfpgmzudsstdfijnlqicoxvngce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843855.5845006-363-124039184207349/AnsiballZ_file.py'
Dec 04 10:24:16 compute-0 sudo[141374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:16 compute-0 python3.9[141376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:16 compute-0 sudo[141374]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:16 compute-0 sudo[141526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwjsdtcuifzjgrhjcvnzerhbmfghpwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843856.616587-363-1626377196025/AnsiballZ_stat.py'
Dec 04 10:24:16 compute-0 sudo[141526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:17 compute-0 python3.9[141528]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:17 compute-0 sudo[141526]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:17 compute-0 sudo[141604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iulrtgvivfytquxjfujuuujwjazuqono ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843856.616587-363-1626377196025/AnsiballZ_file.py'
Dec 04 10:24:17 compute-0 sudo[141604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:17 compute-0 python3.9[141606]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:17 compute-0 sudo[141604]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:17 compute-0 sudo[141756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guajojerhbozbpezvtnnmocpsxmcctrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843857.6202867-386-232747755811423/AnsiballZ_file.py'
Dec 04 10:24:17 compute-0 sudo[141756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:17 compute-0 ceph-mon[75358]: pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:18 compute-0 python3.9[141758]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:18 compute-0 sudo[141756]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:18 compute-0 sudo[141908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaktknuywauwjxtagihnhfsmwdjjfblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843858.2232206-394-66126888713546/AnsiballZ_stat.py'
Dec 04 10:24:18 compute-0 sudo[141908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:18 compute-0 python3.9[141910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:18 compute-0 sudo[141908]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:19 compute-0 sudo[141986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqpxhcxngyvftirxsdjkkzhrzwafbuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843858.2232206-394-66126888713546/AnsiballZ_file.py'
Dec 04 10:24:19 compute-0 sudo[141986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:19 compute-0 python3.9[141988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:19 compute-0 sudo[141986]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:19 compute-0 sudo[142138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgxkiejkccgsxtvbiiooyfnmsvuywpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843859.4280446-406-222103668233696/AnsiballZ_stat.py'
Dec 04 10:24:19 compute-0 sudo[142138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:19 compute-0 python3.9[142140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:19 compute-0 ceph-mon[75358]: pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:20 compute-0 sudo[142138]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:20 compute-0 sudo[142216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdcgxeoagfevdgqcgnwgvbshtpshlwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843859.4280446-406-222103668233696/AnsiballZ_file.py'
Dec 04 10:24:20 compute-0 sudo[142216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:20 compute-0 python3.9[142218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:20 compute-0 sudo[142216]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:21 compute-0 sudo[142368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wienrvxqwzphfeyptxmuafhxqacldmyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843860.635002-418-95985127636459/AnsiballZ_systemd.py'
Dec 04 10:24:21 compute-0 sudo[142368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:21 compute-0 python3.9[142370]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:24:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:21 compute-0 systemd[1]: Reloading.
Dec 04 10:24:21 compute-0 systemd-rc-local-generator[142394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:24:21 compute-0 systemd-sysv-generator[142398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:24:21 compute-0 sudo[142368]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:21 compute-0 sshd-session[142406]: Invalid user master from 74.249.218.27 port 34898
Dec 04 10:24:21 compute-0 sshd-session[142406]: Received disconnect from 74.249.218.27 port 34898:11: Bye Bye [preauth]
Dec 04 10:24:21 compute-0 sshd-session[142406]: Disconnected from invalid user master 74.249.218.27 port 34898 [preauth]
Dec 04 10:24:21 compute-0 ceph-mon[75358]: pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:22 compute-0 sudo[142559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eycsibsqequrcnbhdocqdwyuvzurxjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843861.9290702-426-257965253024296/AnsiballZ_stat.py'
Dec 04 10:24:22 compute-0 sudo[142559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:22 compute-0 python3.9[142561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:22 compute-0 sudo[142559]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:22 compute-0 sudo[142637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffltiihdxbgkugwfwungojkkiihpqagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843861.9290702-426-257965253024296/AnsiballZ_file.py'
Dec 04 10:24:22 compute-0 sudo[142637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:22 compute-0 python3.9[142639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:22 compute-0 sudo[142637]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:23 compute-0 sudo[142789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzhpqqfigsqksvvtjdspbmzsgqoyruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843863.0710437-438-202859042954888/AnsiballZ_stat.py'
Dec 04 10:24:23 compute-0 sudo[142789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:23 compute-0 python3.9[142791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:23 compute-0 sudo[142789]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:23 compute-0 sudo[142867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wodrezhmcufmnjsfjzvezaijyvojnluk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843863.0710437-438-202859042954888/AnsiballZ_file.py'
Dec 04 10:24:23 compute-0 sudo[142867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:23 compute-0 ceph-mon[75358]: pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:24 compute-0 python3.9[142869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:24 compute-0 sudo[142867]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:24 compute-0 sudo[143019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkbttjjptszcdrjajwsikggtxoopxqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843864.2514644-450-78143250640688/AnsiballZ_systemd.py'
Dec 04 10:24:24 compute-0 sudo[143019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:24 compute-0 python3.9[143021]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:24:24 compute-0 systemd[1]: Reloading.
Dec 04 10:24:24 compute-0 systemd-sysv-generator[143055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:24:24 compute-0 systemd-rc-local-generator[143051]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:24:25 compute-0 systemd[1]: Starting Create netns directory...
Dec 04 10:24:25 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 04 10:24:25 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 04 10:24:25 compute-0 systemd[1]: Finished Create netns directory.
Dec 04 10:24:25 compute-0 sudo[143019]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:25 compute-0 sudo[143214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwcubcfupivatzcceorahaaqwupfkgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843865.572499-460-26990297852747/AnsiballZ_file.py'
Dec 04 10:24:25 compute-0 sudo[143214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:26 compute-0 ceph-mon[75358]: pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:26 compute-0 python3.9[143216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:26 compute-0 sudo[143214]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:26 compute-0 sudo[143366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgyzmbpttzumndaojnfooshhfnxvncro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843866.3751552-468-216481039494650/AnsiballZ_stat.py'
Dec 04 10:24:26 compute-0 sudo[143366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:24:26
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:24:26 compute-0 python3.9[143368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:26 compute-0 sudo[143366]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:27 compute-0 sudo[143489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhbbjphhnajywldvpfowdbvocyjzqeir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843866.3751552-468-216481039494650/AnsiballZ_copy.py'
Dec 04 10:24:27 compute-0 sudo[143489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:27 compute-0 python3.9[143491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843866.3751552-468-216481039494650/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:27 compute-0 sudo[143489]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:24:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:24:28 compute-0 ceph-mon[75358]: pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:28 compute-0 sudo[143641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kogfgvjbtxkyrkbijsklwppcmqkvlevw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843867.8153806-485-116962519636573/AnsiballZ_file.py'
Dec 04 10:24:28 compute-0 sudo[143641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:28 compute-0 python3.9[143643]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:28 compute-0 sudo[143641]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:28 compute-0 sudo[143795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsxsfocpddrsapksqbnxtmdwrmylwlgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843868.5357344-493-8178310881122/AnsiballZ_stat.py'
Dec 04 10:24:28 compute-0 sudo[143795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:28 compute-0 python3.9[143797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:24:29 compute-0 sudo[143795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:29 compute-0 sudo[143918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawkhvsrsnyjbhtexwbkqrklovvfpzli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843868.5357344-493-8178310881122/AnsiballZ_copy.py'
Dec 04 10:24:29 compute-0 sudo[143918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:29 compute-0 python3.9[143920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843868.5357344-493-8178310881122/.source.json _original_basename=.mr03u9_e follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:29 compute-0 sudo[143918]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:29 compute-0 sshd-session[143658]: Invalid user intell from 103.179.218.243 port 41798
Dec 04 10:24:29 compute-0 sshd-session[143658]: Received disconnect from 103.179.218.243 port 41798:11: Bye Bye [preauth]
Dec 04 10:24:29 compute-0 sshd-session[143658]: Disconnected from invalid user intell 103.179.218.243 port 41798 [preauth]
Dec 04 10:24:29 compute-0 sudo[144070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecsbrwuuelyydiusfufgmimiwlmyntqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843869.656561-508-143928619416055/AnsiballZ_file.py'
Dec 04 10:24:29 compute-0 sudo[144070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:30 compute-0 python3.9[144072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:30 compute-0 sudo[144070]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:30 compute-0 sudo[144222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkelwubnhllyewahxcejaypyvrbokct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843870.3152928-516-133619005864640/AnsiballZ_stat.py'
Dec 04 10:24:30 compute-0 ceph-mon[75358]: pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:30 compute-0 sudo[144222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:30 compute-0 sudo[144222]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:31 compute-0 sudo[144345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbgnspuushesuchihhcxscnnltxqnwee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843870.3152928-516-133619005864640/AnsiballZ_copy.py'
Dec 04 10:24:31 compute-0 sudo[144345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:31 compute-0 sudo[144345]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:31 compute-0 ceph-mon[75358]: pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:32 compute-0 sudo[144497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcbgtawfmjfsfffjnwexuwgptfgocfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843871.5771303-533-138326477251945/AnsiballZ_container_config_data.py'
Dec 04 10:24:32 compute-0 sudo[144497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:32 compute-0 python3.9[144499]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 04 10:24:32 compute-0 sudo[144497]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:32 compute-0 sudo[144649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycuarjrbdzyjpijnxbhbrcguvpiqfwex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843872.5011864-542-115164801089474/AnsiballZ_container_config_hash.py'
Dec 04 10:24:32 compute-0 sudo[144649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:33 compute-0 python3.9[144651]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 04 10:24:33 compute-0 sudo[144649]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:33 compute-0 sudo[144801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiffwelufqmxbjbqlvjzrkxshfwffdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843873.3388767-551-169328799053902/AnsiballZ_podman_container_info.py'
Dec 04 10:24:33 compute-0 sudo[144801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:33 compute-0 python3.9[144803]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 04 10:24:33 compute-0 ceph-mon[75358]: pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:34 compute-0 sudo[144801]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:34 compute-0 sudo[144852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:24:34 compute-0 sudo[144852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:34 compute-0 sudo[144852]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:34 compute-0 sudo[144877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:24:34 compute-0 sudo[144877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:34 compute-0 sudo[144877]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.977263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874977306, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 795, "num_deletes": 251, "total_data_size": 1056740, "memory_usage": 1071376, "flush_reason": "Manual Compaction"}
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874984582, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1047298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8959, "largest_seqno": 9753, "table_properties": {"data_size": 1043224, "index_size": 1790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8637, "raw_average_key_size": 18, "raw_value_size": 1035130, "raw_average_value_size": 2235, "num_data_blocks": 83, "num_entries": 463, "num_filter_entries": 463, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843807, "oldest_key_time": 1764843807, "file_creation_time": 1764843874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7363 microseconds, and 3689 cpu microseconds.
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.984624) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1047298 bytes OK
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.984651) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985875) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985889) EVENT_LOG_v1 {"time_micros": 1764843874985884, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985909) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1052748, prev total WAL file size 1052748, number of live WAL files 2.
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.986432) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1022KB)], [23(6722KB)]
Dec 04 10:24:34 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874986475, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7930809, "oldest_snapshot_seqno": -1}
Dec 04 10:24:35 compute-0 sudo[144998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:24:35 compute-0 sudo[144998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:35 compute-0 sudo[144998]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3346 keys, 6230652 bytes, temperature: kUnknown
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875024219, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6230652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6206440, "index_size": 14759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81192, "raw_average_key_size": 24, "raw_value_size": 6144070, "raw_average_value_size": 1836, "num_data_blocks": 644, "num_entries": 3346, "num_filter_entries": 3346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.024433) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6230652 bytes
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.026077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.7 rd, 164.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.6 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(13.5) write-amplify(5.9) OK, records in: 3860, records dropped: 514 output_compression: NoCompression
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.026114) EVENT_LOG_v1 {"time_micros": 1764843875026087, "job": 8, "event": "compaction_finished", "compaction_time_micros": 37820, "compaction_time_cpu_micros": 15519, "output_level": 6, "num_output_files": 1, "total_output_size": 6230652, "num_input_records": 3860, "num_output_records": 3346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875026364, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875027471, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.986389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:24:35 compute-0 sudo[145040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:24:35 compute-0 sudo[145040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:35 compute-0 sudo[145108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpkmnnqkpjwkihqvetlyaeckwdcrjzz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843874.6388724-564-203134219099649/AnsiballZ_edpm_container_manage.py'
Dec 04 10:24:35 compute-0 sudo[145108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.322263446 +0000 UTC m=+0.040039669 container create e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:24:35 compute-0 systemd[1]: Started libpod-conmon-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope.
Dec 04 10:24:35 compute-0 python3[145110]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 04 10:24:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.303069785 +0000 UTC m=+0.020846038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.403979967 +0000 UTC m=+0.121756240 container init e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.411571257 +0000 UTC m=+0.129347490 container start e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.415570828 +0000 UTC m=+0.133347071 container attach e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:35 compute-0 pensive_austin[145140]: 167 167
Dec 04 10:24:35 compute-0 systemd[1]: libpod-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope: Deactivated successfully.
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.418652553 +0000 UTC m=+0.136428806 container died e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b170f13be015f36d6b66baea7c47217b5be1ed9ba9a4927987c11524e371ec6-merged.mount: Deactivated successfully.
Dec 04 10:24:35 compute-0 podman[145124]: 2025-12-04 10:24:35.462976049 +0000 UTC m=+0.180752302 container remove e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:24:35 compute-0 systemd[1]: libpod-conmon-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope: Deactivated successfully.
Dec 04 10:24:35 compute-0 podman[145187]: 2025-12-04 10:24:35.634875476 +0000 UTC m=+0.040370128 container create 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:24:35 compute-0 systemd[1]: Started libpod-conmon-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope.
Dec 04 10:24:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:35 compute-0 podman[145187]: 2025-12-04 10:24:35.619716176 +0000 UTC m=+0.025210828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:35 compute-0 podman[145187]: 2025-12-04 10:24:35.723999722 +0000 UTC m=+0.129494404 container init 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:24:35 compute-0 podman[145187]: 2025-12-04 10:24:35.731428577 +0000 UTC m=+0.136923229 container start 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:24:35 compute-0 podman[145187]: 2025-12-04 10:24:35.734901104 +0000 UTC m=+0.140395756 container attach 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:24:35 compute-0 ceph-mon[75358]: pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:36 compute-0 sweet_swartz[145202]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:24:36 compute-0 sweet_swartz[145202]: --> All data devices are unavailable
Dec 04 10:24:36 compute-0 systemd[1]: libpod-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope: Deactivated successfully.
Dec 04 10:24:36 compute-0 podman[145241]: 2025-12-04 10:24:36.27362964 +0000 UTC m=+0.031107882 container died 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:24:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af-merged.mount: Deactivated successfully.
Dec 04 10:24:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:36 compute-0 podman[145241]: 2025-12-04 10:24:36.315382655 +0000 UTC m=+0.072860877 container remove 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:24:36 compute-0 systemd[1]: libpod-conmon-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope: Deactivated successfully.
Dec 04 10:24:36 compute-0 sudo[145040]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:36 compute-0 sudo[145255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:24:36 compute-0 sudo[145255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:36 compute-0 sudo[145255]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:36 compute-0 sudo[145280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:24:36 compute-0 sudo[145280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:36 compute-0 podman[145316]: 2025-12-04 10:24:36.894031918 +0000 UTC m=+0.023717608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:24:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:24:37 compute-0 podman[145316]: 2025-12-04 10:24:37.263466729 +0000 UTC m=+0.393152389 container create 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:37 compute-0 ceph-mon[75358]: pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:37 compute-0 systemd[1]: Started libpod-conmon-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope.
Dec 04 10:24:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:37 compute-0 podman[145316]: 2025-12-04 10:24:37.532534555 +0000 UTC m=+0.662220225 container init 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:24:37 compute-0 podman[145316]: 2025-12-04 10:24:37.541133043 +0000 UTC m=+0.670818713 container start 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:24:37 compute-0 pensive_shockley[145342]: 167 167
Dec 04 10:24:37 compute-0 systemd[1]: libpod-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope: Deactivated successfully.
Dec 04 10:24:37 compute-0 podman[145316]: 2025-12-04 10:24:37.548323581 +0000 UTC m=+0.678009241 container attach 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:24:37 compute-0 podman[145316]: 2025-12-04 10:24:37.549043261 +0000 UTC m=+0.678728911 container died 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:40 compute-0 ceph-mon[75358]: pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-720dc0ce7c3cdb0bac0a1fe2041e5e8e54c0e91df6c89140418f81c94342ba4e-merged.mount: Deactivated successfully.
Dec 04 10:24:41 compute-0 ceph-mon[75358]: pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:41 compute-0 podman[145316]: 2025-12-04 10:24:41.762267962 +0000 UTC m=+4.891953622 container remove 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:24:41 compute-0 systemd[1]: libpod-conmon-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope: Deactivated successfully.
Dec 04 10:24:41 compute-0 podman[145157]: 2025-12-04 10:24:41.811739341 +0000 UTC m=+6.381793686 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 04 10:24:41 compute-0 podman[145443]: 2025-12-04 10:24:41.958298216 +0000 UTC m=+0.055523647 container create c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:24:41 compute-0 podman[145445]: 2025-12-04 10:24:41.976815159 +0000 UTC m=+0.062981644 container create 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 04 10:24:41 compute-0 podman[145445]: 2025-12-04 10:24:41.944471484 +0000 UTC m=+0.030638019 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 04 10:24:41 compute-0 python3[145110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 04 10:24:42 compute-0 systemd[1]: Started libpod-conmon-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope.
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:41.928012888 +0000 UTC m=+0.025238369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:42.057282135 +0000 UTC m=+0.154507556 container init c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:42.06756395 +0000 UTC m=+0.164789361 container start c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:42.070625684 +0000 UTC m=+0.167851095 container attach c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:24:42 compute-0 sudo[145108]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]: {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     "0": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "devices": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "/dev/loop3"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             ],
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_name": "ceph_lv0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_size": "21470642176",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "name": "ceph_lv0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "tags": {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_name": "ceph",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.crush_device_class": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.encrypted": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.objectstore": "bluestore",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_id": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.vdo": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.with_tpm": "0"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             },
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "vg_name": "ceph_vg0"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         }
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     ],
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     "1": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "devices": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "/dev/loop4"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             ],
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_name": "ceph_lv1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_size": "21470642176",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "name": "ceph_lv1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "tags": {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_name": "ceph",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.crush_device_class": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.encrypted": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.objectstore": "bluestore",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_id": "1",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.vdo": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.with_tpm": "0"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             },
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "vg_name": "ceph_vg1"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         }
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     ],
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     "2": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "devices": [
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "/dev/loop5"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             ],
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_name": "ceph_lv2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_size": "21470642176",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "name": "ceph_lv2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "tags": {
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.cluster_name": "ceph",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.crush_device_class": "",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.encrypted": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.objectstore": "bluestore",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osd_id": "2",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.vdo": "0",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:                 "ceph.with_tpm": "0"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             },
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "type": "block",
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:             "vg_name": "ceph_vg2"
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:         }
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]:     ]
Dec 04 10:24:42 compute-0 wizardly_lamarr[145478]: }
Dec 04 10:24:42 compute-0 systemd[1]: libpod-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope: Deactivated successfully.
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:42.424145406 +0000 UTC m=+0.521370837 container died c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2-merged.mount: Deactivated successfully.
Dec 04 10:24:42 compute-0 podman[145443]: 2025-12-04 10:24:42.471029264 +0000 UTC m=+0.568254685 container remove c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:24:42 compute-0 systemd[1]: libpod-conmon-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope: Deactivated successfully.
Dec 04 10:24:42 compute-0 sudo[145280]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:42 compute-0 sudo[145618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:24:42 compute-0 sudo[145618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:42 compute-0 sudo[145618]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:42 compute-0 sudo[145667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:24:42 compute-0 sudo[145667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:42 compute-0 sudo[145718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwstbbfkuvgksloizghgbwfqchksapkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843882.3233304-572-4131223252773/AnsiballZ_stat.py'
Dec 04 10:24:42 compute-0 sudo[145718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:42 compute-0 python3.9[145720]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:24:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:42 compute-0 sudo[145718]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.014931513 +0000 UTC m=+0.049757397 container create c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:24:43 compute-0 systemd[1]: Started libpod-conmon-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope.
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:42.997893682 +0000 UTC m=+0.032719566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.117853212 +0000 UTC m=+0.152679116 container init c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.126181642 +0000 UTC m=+0.161007536 container start c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:43 compute-0 unruffled_montalcini[145776]: 167 167
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.131327444 +0000 UTC m=+0.166153358 container attach c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:24:43 compute-0 systemd[1]: libpod-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope: Deactivated successfully.
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.133135054 +0000 UTC m=+0.167960948 container died c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-70cbf0c1439b02d228ea08d7b0ec5c911f51884f31523b683a3cc38f01db497b-merged.mount: Deactivated successfully.
Dec 04 10:24:43 compute-0 podman[145736]: 2025-12-04 10:24:43.186054488 +0000 UTC m=+0.220880382 container remove c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:24:43 compute-0 systemd[1]: libpod-conmon-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope: Deactivated successfully.
Dec 04 10:24:43 compute-0 podman[145877]: 2025-12-04 10:24:43.377261179 +0000 UTC m=+0.051697582 container create 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:24:43 compute-0 systemd[1]: Started libpod-conmon-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope.
Dec 04 10:24:43 compute-0 podman[145877]: 2025-12-04 10:24:43.352223746 +0000 UTC m=+0.026660199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:24:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:43 compute-0 podman[145877]: 2025-12-04 10:24:43.471768164 +0000 UTC m=+0.146204567 container init 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:24:43 compute-0 podman[145877]: 2025-12-04 10:24:43.481087312 +0000 UTC m=+0.155523715 container start 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:24:43 compute-0 podman[145877]: 2025-12-04 10:24:43.484336512 +0000 UTC m=+0.158772915 container attach 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 04 10:24:43 compute-0 sudo[145949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmhwsfzyvjqstbbndifiyktdmjafbyxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843883.1847298-581-207361526485764/AnsiballZ_file.py'
Dec 04 10:24:43 compute-0 sudo[145949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:43 compute-0 python3.9[145951]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:43 compute-0 sudo[145949]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:43 compute-0 sudo[146049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlxisgdyovxqqjidzyiqtaqoqywigzgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843883.1847298-581-207361526485764/AnsiballZ_stat.py'
Dec 04 10:24:43 compute-0 sudo[146049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:43 compute-0 ceph-mon[75358]: pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:44 compute-0 python3.9[146054]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:24:44 compute-0 sudo[146049]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:44 compute-0 lvm[146125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:24:44 compute-0 lvm[146125]: VG ceph_vg0 finished
Dec 04 10:24:44 compute-0 lvm[146126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:24:44 compute-0 lvm[146126]: VG ceph_vg1 finished
Dec 04 10:24:44 compute-0 lvm[146133]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:24:44 compute-0 lvm[146133]: VG ceph_vg2 finished
Dec 04 10:24:44 compute-0 quirky_varahamihira[145918]: {}
Dec 04 10:24:44 compute-0 systemd[1]: libpod-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Deactivated successfully.
Dec 04 10:24:44 compute-0 systemd[1]: libpod-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Consumed 1.426s CPU time.
Dec 04 10:24:44 compute-0 podman[145877]: 2025-12-04 10:24:44.33683042 +0000 UTC m=+1.011266873 container died 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd-merged.mount: Deactivated successfully.
Dec 04 10:24:44 compute-0 podman[145877]: 2025-12-04 10:24:44.383537883 +0000 UTC m=+1.057974286 container remove 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:24:44 compute-0 systemd[1]: libpod-conmon-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Deactivated successfully.
Dec 04 10:24:44 compute-0 sudo[145667]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:24:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:24:44 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:44 compute-0 sudo[146212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:24:44 compute-0 sudo[146212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:24:44 compute-0 sudo[146212]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:44 compute-0 sudo[146293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyelfhzduflqlupmzlvenlmhjjilfjfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843884.1837006-581-123452624321607/AnsiballZ_copy.py'
Dec 04 10:24:44 compute-0 sudo[146293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:44 compute-0 python3.9[146295]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843884.1837006-581-123452624321607/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:24:44 compute-0 sudo[146293]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:45 compute-0 sudo[146369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clztjzkhdawtlhbmrnqgsxptpoptdnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843884.1837006-581-123452624321607/AnsiballZ_systemd.py'
Dec 04 10:24:45 compute-0 sudo[146369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:45 compute-0 python3.9[146371]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:24:45 compute-0 systemd[1]: Reloading.
Dec 04 10:24:45 compute-0 systemd-rc-local-generator[146397]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:24:45 compute-0 systemd-sysv-generator[146400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:24:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:24:45 compute-0 ceph-mon[75358]: pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:45 compute-0 sudo[146369]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:45 compute-0 sudo[146481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmqcjdljlkyphhjlhukguyapxyxnslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843884.1837006-581-123452624321607/AnsiballZ_systemd.py'
Dec 04 10:24:45 compute-0 sudo[146481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:46 compute-0 python3.9[146483]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:24:46 compute-0 systemd[1]: Reloading.
Dec 04 10:24:46 compute-0 systemd-rc-local-generator[146510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:24:46 compute-0 systemd-sysv-generator[146515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:24:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:47 compute-0 ceph-mon[75358]: pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:47 compute-0 systemd[1]: Starting ovn_controller container...
Dec 04 10:24:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb152d2b6b97a8dcfc4acfb3d286a54e858d1d391f4d3ca6b50f427eb3899b84/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 04 10:24:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06.
Dec 04 10:24:48 compute-0 podman[146523]: 2025-12-04 10:24:48.283951317 +0000 UTC m=+0.344000509 container init 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + sudo -E kolla_set_configs
Dec 04 10:24:48 compute-0 podman[146523]: 2025-12-04 10:24:48.320224391 +0000 UTC m=+0.380273573 container start 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:24:48 compute-0 edpm-start-podman-container[146523]: ovn_controller
Dec 04 10:24:48 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 04 10:24:48 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 04 10:24:48 compute-0 edpm-start-podman-container[146522]: Creating additional drop-in dependency for "ovn_controller" (0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06)
Dec 04 10:24:48 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 04 10:24:48 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 04 10:24:48 compute-0 systemd[146577]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 04 10:24:48 compute-0 systemd[1]: Reloading.
Dec 04 10:24:48 compute-0 podman[146545]: 2025-12-04 10:24:48.439374568 +0000 UTC m=+0.095373270 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 04 10:24:48 compute-0 systemd-sysv-generator[146624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:24:48 compute-0 systemd-rc-local-generator[146621]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:24:48 compute-0 systemd[146577]: Queued start job for default target Main User Target.
Dec 04 10:24:48 compute-0 systemd[146577]: Created slice User Application Slice.
Dec 04 10:24:48 compute-0 systemd[146577]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 04 10:24:48 compute-0 systemd[146577]: Started Daily Cleanup of User's Temporary Directories.
Dec 04 10:24:48 compute-0 systemd[146577]: Reached target Paths.
Dec 04 10:24:48 compute-0 systemd[146577]: Reached target Timers.
Dec 04 10:24:48 compute-0 systemd[146577]: Starting D-Bus User Message Bus Socket...
Dec 04 10:24:48 compute-0 systemd[146577]: Starting Create User's Volatile Files and Directories...
Dec 04 10:24:48 compute-0 systemd[146577]: Listening on D-Bus User Message Bus Socket.
Dec 04 10:24:48 compute-0 systemd[146577]: Reached target Sockets.
Dec 04 10:24:48 compute-0 systemd[146577]: Finished Create User's Volatile Files and Directories.
Dec 04 10:24:48 compute-0 systemd[146577]: Reached target Basic System.
Dec 04 10:24:48 compute-0 systemd[146577]: Reached target Main User Target.
Dec 04 10:24:48 compute-0 systemd[146577]: Startup finished in 165ms.
Dec 04 10:24:48 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 04 10:24:48 compute-0 systemd[1]: Started ovn_controller container.
Dec 04 10:24:48 compute-0 systemd[1]: 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06-3769d24cd29046b4.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 10:24:48 compute-0 systemd[1]: 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06-3769d24cd29046b4.service: Failed with result 'exit-code'.
Dec 04 10:24:48 compute-0 systemd[1]: Started Session c1 of User root.
Dec 04 10:24:48 compute-0 sudo[146481]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:48 compute-0 ovn_controller[146538]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:24:48 compute-0 ovn_controller[146538]: INFO:__main__:Validating config file
Dec 04 10:24:48 compute-0 ovn_controller[146538]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:24:48 compute-0 ovn_controller[146538]: INFO:__main__:Writing out command to execute
Dec 04 10:24:48 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: ++ cat /run_command
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + ARGS=
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + sudo kolla_copy_cacerts
Dec 04 10:24:48 compute-0 systemd[1]: Started Session c2 of User root.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + [[ ! -n '' ]]
Dec 04 10:24:48 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + . kolla_extend_start
Dec 04 10:24:48 compute-0 ovn_controller[146538]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + umask 0022
Dec 04 10:24:48 compute-0 ovn_controller[146538]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 04 10:24:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9274] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9285] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9298] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9304] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9308] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 04 10:24:48 compute-0 kernel: br-int: entered promiscuous mode
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 04 10:24:48 compute-0 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9562] manager: (ovn-bb8252-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 04 10:24:48 compute-0 systemd-udevd[146696]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 10:24:48 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 04 10:24:48 compute-0 systemd-udevd[146698]: Network interface NamePolicy= disabled on kernel command line.
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9798] device (genev_sys_6081): carrier: link connected
Dec 04 10:24:48 compute-0 NetworkManager[49155]: <info>  [1764843888.9803] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 04 10:24:49 compute-0 sudo[146802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwazymrzyqjgoqiwvsgyqdoxchgtmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843888.942218-609-66354112622991/AnsiballZ_command.py'
Dec 04 10:24:49 compute-0 sudo[146802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:49 compute-0 python3.9[146804]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:49 compute-0 ovs-vsctl[146805]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 04 10:24:49 compute-0 sudo[146802]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:49 compute-0 sudo[146955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfgllabcoajetoqqwlirlndieppyugb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843889.6487772-617-34963481790130/AnsiballZ_command.py'
Dec 04 10:24:49 compute-0 sudo[146955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:50 compute-0 ceph-mon[75358]: pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:50 compute-0 python3.9[146957]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:50 compute-0 ovs-vsctl[146959]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 04 10:24:50 compute-0 sudo[146955]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:50 compute-0 sudo[147110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abcethrapbqzanlfksfxbtsrsakxwqrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843890.547358-631-50630471546360/AnsiballZ_command.py'
Dec 04 10:24:50 compute-0 sudo[147110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:51 compute-0 python3.9[147112]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:24:51 compute-0 ovs-vsctl[147113]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 04 10:24:51 compute-0 sudo[147110]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:51 compute-0 sshd-session[135311]: Connection closed by 192.168.122.30 port 56372
Dec 04 10:24:51 compute-0 sshd-session[135308]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:24:51 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 04 10:24:51 compute-0 systemd[1]: session-46.scope: Consumed 58.170s CPU time.
Dec 04 10:24:51 compute-0 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Dec 04 10:24:51 compute-0 systemd-logind[798]: Removed session 46.
Dec 04 10:24:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:52 compute-0 ceph-mon[75358]: pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:54 compute-0 ceph-mon[75358]: pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:56 compute-0 ceph-mon[75358]: pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:24:56 compute-0 sshd-session[147138]: Accepted publickey for zuul from 192.168.122.30 port 42986 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:24:56 compute-0 systemd-logind[798]: New session 48 of user zuul.
Dec 04 10:24:56 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 04 10:24:56 compute-0 sshd-session[147138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:24:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:57 compute-0 sshd-session[147160]: Invalid user vtatis from 217.154.62.22 port 49794
Dec 04 10:24:57 compute-0 sshd-session[147160]: Received disconnect from 217.154.62.22 port 49794:11: Bye Bye [preauth]
Dec 04 10:24:57 compute-0 sshd-session[147160]: Disconnected from invalid user vtatis 217.154.62.22 port 49794 [preauth]
Dec 04 10:24:57 compute-0 python3.9[147293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:24:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:24:58 compute-0 ceph-mon[75358]: pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:58 compute-0 sudo[147447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwnthdfrabufjvqvizgcymzsgvsyikuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843898.0835795-34-35592554691923/AnsiballZ_file.py'
Dec 04 10:24:58 compute-0 sudo[147447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:58 compute-0 python3.9[147449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:58 compute-0 sudo[147447]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:24:59 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 04 10:24:59 compute-0 systemd[146577]: Activating special unit Exit the Session...
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped target Main User Target.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped target Basic System.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped target Paths.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped target Sockets.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped target Timers.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 04 10:24:59 compute-0 systemd[146577]: Closed D-Bus User Message Bus Socket.
Dec 04 10:24:59 compute-0 systemd[146577]: Stopped Create User's Volatile Files and Directories.
Dec 04 10:24:59 compute-0 systemd[146577]: Removed slice User Application Slice.
Dec 04 10:24:59 compute-0 systemd[146577]: Reached target Shutdown.
Dec 04 10:24:59 compute-0 systemd[146577]: Finished Exit the Session.
Dec 04 10:24:59 compute-0 systemd[146577]: Reached target Exit the Session.
Dec 04 10:24:59 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 04 10:24:59 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 04 10:24:59 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 04 10:24:59 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 04 10:24:59 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 04 10:24:59 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 04 10:24:59 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 04 10:24:59 compute-0 sudo[147601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwhqiznwblwrpiiwyysgyflqvyyznoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843898.8660622-34-76698829141965/AnsiballZ_file.py'
Dec 04 10:24:59 compute-0 sudo[147601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:59 compute-0 python3.9[147603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:59 compute-0 sudo[147601]: pam_unix(sudo:session): session closed for user root
Dec 04 10:24:59 compute-0 sudo[147753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhqjahmkgojmayurwnnuejibqlrioytw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843899.5097978-34-25284134812331/AnsiballZ_file.py'
Dec 04 10:24:59 compute-0 sudo[147753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:24:59 compute-0 python3.9[147755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:24:59 compute-0 sudo[147753]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:00 compute-0 ceph-mon[75358]: pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:00 compute-0 sudo[147905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtsmiyhcublwklnctcxaiktmidbwowve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843900.1291492-34-116101082841354/AnsiballZ_file.py'
Dec 04 10:25:00 compute-0 sudo[147905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:00 compute-0 python3.9[147907]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:00 compute-0 sudo[147905]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:01 compute-0 sudo[148057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdaknixzkbwggskmdzzmjpyftfyzaoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843900.804087-34-116845500174350/AnsiballZ_file.py'
Dec 04 10:25:01 compute-0 sudo[148057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:01 compute-0 python3.9[148059]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:01 compute-0 sudo[148057]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:01 compute-0 python3.9[148209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:25:02 compute-0 ceph-mon[75358]: pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:02 compute-0 sudo[148359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxfcsyboucqimarhaqbpdpngtfvtksd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843902.19104-78-96095125717473/AnsiballZ_seboolean.py'
Dec 04 10:25:02 compute-0 sudo[148359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:02 compute-0 python3.9[148361]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 04 10:25:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:03 compute-0 ceph-mon[75358]: pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:03 compute-0 sudo[148359]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:04 compute-0 python3.9[148511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:05 compute-0 python3.9[148632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843903.6793542-86-78928593574865/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:05 compute-0 python3.9[148782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:06 compute-0 ceph-mon[75358]: pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:06 compute-0 python3.9[148903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843905.2663631-101-53476565445212/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:06 compute-0 sudo[149053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvylbozsmqbbvesbmtokvenzdlgqhuuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843906.6429722-118-260689915571340/AnsiballZ_setup.py'
Dec 04 10:25:06 compute-0 sudo[149053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:07 compute-0 python3.9[149056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:25:07 compute-0 ceph-mon[75358]: pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:07 compute-0 sudo[149053]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:08 compute-0 sudo[149138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyeffszydgdnvgmzyrisbliszifrajxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843906.6429722-118-260689915571340/AnsiballZ_dnf.py'
Dec 04 10:25:08 compute-0 sudo[149138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:08 compute-0 python3.9[149140]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:25:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:09 compute-0 sudo[149138]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:10 compute-0 ceph-mon[75358]: pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:10 compute-0 sudo[149291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxigfrfozdebfxsurnppwctwwwkyfqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843909.7514508-130-41634257257159/AnsiballZ_systemd.py'
Dec 04 10:25:10 compute-0 sudo[149291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:10 compute-0 python3.9[149293]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:25:10 compute-0 sudo[149291]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:11 compute-0 python3.9[149446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:11 compute-0 python3.9[149567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843910.9214911-138-74426705696416/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:12 compute-0 ceph-mon[75358]: pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:12 compute-0 sshd[1008]: Timeout before authentication for connection from 14.103.116.173 to 38.102.83.169, pid = 130269
Dec 04 10:25:12 compute-0 python3.9[149717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:12 compute-0 python3.9[149838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843912.0229387-138-220106028479068/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:14 compute-0 ceph-mon[75358]: pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:14 compute-0 python3.9[149988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:14 compute-0 python3.9[150109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843913.7120917-182-37497758006658/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:15 compute-0 python3.9[150259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:15 compute-0 ceph-mon[75358]: pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:15 compute-0 python3.9[150380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843914.8843508-182-179715745958886/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:16 compute-0 python3.9[150530]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:25:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:17 compute-0 sudo[150682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqgvnrhppjxatoghwwwvwzkzaafsmtwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843916.7934697-220-118358928454659/AnsiballZ_file.py'
Dec 04 10:25:17 compute-0 sudo[150682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:17 compute-0 ceph-mon[75358]: pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:17 compute-0 python3.9[150684]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:17 compute-0 sudo[150682]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:18 compute-0 sudo[150834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmnptfnprorzexvjzsevqtpnvvurkih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843917.765102-228-254423301728288/AnsiballZ_stat.py'
Dec 04 10:25:18 compute-0 sudo[150834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:18 compute-0 python3.9[150836]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:18 compute-0 sudo[150834]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:18 compute-0 sudo[150912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jccrppfbpycaotfhxjijcdhyzutudgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843917.765102-228-254423301728288/AnsiballZ_file.py'
Dec 04 10:25:18 compute-0 sudo[150912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:18 compute-0 python3.9[150914]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:18 compute-0 sudo[150912]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:18 compute-0 ovn_controller[146538]: 2025-12-04T10:25:18Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Dec 04 10:25:18 compute-0 ovn_controller[146538]: 2025-12-04T10:25:18Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 04 10:25:18 compute-0 podman[150992]: 2025-12-04 10:25:18.988076642 +0000 UTC m=+0.091448108 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:19 compute-0 sudo[151090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpecdihzkehfetyqqqkfkzpvgsjvkrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843918.7832599-228-85855089204569/AnsiballZ_stat.py'
Dec 04 10:25:19 compute-0 sudo[151090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:19 compute-0 python3.9[151095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:19 compute-0 sudo[151090]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:19 compute-0 sudo[151173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhuzsslwkxjrdnwbdxoenebipgphigtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843918.7832599-228-85855089204569/AnsiballZ_file.py'
Dec 04 10:25:19 compute-0 sudo[151173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:19 compute-0 python3.9[151175]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:19 compute-0 sudo[151173]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:20 compute-0 sudo[151325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtymsrubouzcvtfafykgemvkpurjriz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843919.8238049-251-95640298258695/AnsiballZ_file.py'
Dec 04 10:25:20 compute-0 sudo[151325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:20 compute-0 ceph-mon[75358]: pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:20 compute-0 sshd-session[151121]: Invalid user root2 from 103.149.86.230 port 45370
Dec 04 10:25:20 compute-0 python3.9[151327]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:20 compute-0 sudo[151325]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:20 compute-0 sshd-session[151121]: Received disconnect from 103.149.86.230 port 45370:11: Bye Bye [preauth]
Dec 04 10:25:20 compute-0 sshd-session[151121]: Disconnected from invalid user root2 103.149.86.230 port 45370 [preauth]
Dec 04 10:25:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:21 compute-0 sudo[151477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qevnfwkcnxawaabnmywmyicfmnyjkevw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843920.809762-259-142947947815280/AnsiballZ_stat.py'
Dec 04 10:25:21 compute-0 sudo[151477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:21 compute-0 python3.9[151479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:21 compute-0 sudo[151477]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:21 compute-0 sudo[151555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbayrwbjhpbwzgyvhelzeshtzpizfkym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843920.809762-259-142947947815280/AnsiballZ_file.py'
Dec 04 10:25:21 compute-0 sudo[151555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:21 compute-0 ceph-mon[75358]: pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:21 compute-0 python3.9[151557]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:21 compute-0 sudo[151555]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:22 compute-0 sudo[151707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adkajqjjbmqlsamjrmlpmxjlmrsydozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843921.887961-271-98313856119780/AnsiballZ_stat.py'
Dec 04 10:25:22 compute-0 sudo[151707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:22 compute-0 python3.9[151709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:22 compute-0 sudo[151707]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:22 compute-0 sudo[151785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowpzdhctjznbxxdlzcbntunjxinbmwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843921.887961-271-98313856119780/AnsiballZ_file.py'
Dec 04 10:25:22 compute-0 sudo[151785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:22 compute-0 python3.9[151787]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:22 compute-0 sudo[151785]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:25:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 18.49 MB, 0.03 MB/s
                                           Interval WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:25:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:23 compute-0 sudo[151937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdlsahhwftkkgapzhpczppldpeobetji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843923.0405173-283-153134053736990/AnsiballZ_systemd.py'
Dec 04 10:25:23 compute-0 sudo[151937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:23 compute-0 python3.9[151939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:25:23 compute-0 systemd[1]: Reloading.
Dec 04 10:25:23 compute-0 systemd-sysv-generator[151969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:25:23 compute-0 systemd-rc-local-generator[151964]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:25:23 compute-0 ceph-mon[75358]: pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:24 compute-0 sudo[151937]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:24 compute-0 sudo[152126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiayfinrshovtbmkdhxmbfxwsyzlvyan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843924.1733296-291-279139901825929/AnsiballZ_stat.py'
Dec 04 10:25:24 compute-0 sudo[152126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:24 compute-0 python3.9[152128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:24 compute-0 sudo[152126]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:24 compute-0 sudo[152204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjazmcmdljyjnoeljqlwxpiaexbeeigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843924.1733296-291-279139901825929/AnsiballZ_file.py'
Dec 04 10:25:24 compute-0 sudo[152204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:25 compute-0 python3.9[152206]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:25 compute-0 sudo[152204]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:25 compute-0 sudo[152356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijliiwrpjxdcjwarhyfxidxvvjijotwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843925.28182-303-122091901788737/AnsiballZ_stat.py'
Dec 04 10:25:25 compute-0 sudo[152356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:25 compute-0 python3.9[152358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:25 compute-0 sudo[152356]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:26 compute-0 sudo[152434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozqfpjcgshunmzbtpmmstntokmxbnfte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843925.28182-303-122091901788737/AnsiballZ_file.py'
Dec 04 10:25:26 compute-0 ceph-mon[75358]: pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:26 compute-0 sudo[152434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:26 compute-0 python3.9[152436]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:26 compute-0 sudo[152434]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:25:26
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:25:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:26 compute-0 sudo[152586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpoalloetpflcrgtkieffnoxicjlucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843926.4413936-315-72618164005716/AnsiballZ_systemd.py'
Dec 04 10:25:26 compute-0 sudo[152586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:27 compute-0 python3.9[152588]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:25:27 compute-0 systemd[1]: Reloading.
Dec 04 10:25:27 compute-0 systemd-sysv-generator[152617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:25:27 compute-0 systemd-rc-local-generator[152611]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:25:27 compute-0 ceph-mon[75358]: pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:27 compute-0 systemd[1]: Starting Create netns directory...
Dec 04 10:25:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 04 10:25:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 04 10:25:27 compute-0 systemd[1]: Finished Create netns directory.
Dec 04 10:25:27 compute-0 sudo[152586]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:25:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:25:28 compute-0 sudo[152779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvgmemdnwxeizvvefjncqmatdyceivd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843927.8321743-325-58224141250680/AnsiballZ_file.py'
Dec 04 10:25:28 compute-0 sudo[152779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:28 compute-0 python3.9[152781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:28 compute-0 sudo[152779]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:28 compute-0 sudo[152931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iezlpggoedgkoiqbjcajnfydcsgyfsxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843928.5339332-333-46758311429135/AnsiballZ_stat.py'
Dec 04 10:25:28 compute-0 sudo[152931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:29 compute-0 python3.9[152933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:29 compute-0 sudo[152931]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:25:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Cumulative writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
                                           Interval WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:25:29 compute-0 sudo[153054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marxcaivvkzdgnehtbguapkusewvfjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843928.5339332-333-46758311429135/AnsiballZ_copy.py'
Dec 04 10:25:29 compute-0 sudo[153054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:29 compute-0 python3.9[153056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843928.5339332-333-46758311429135/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:29 compute-0 sudo[153054]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:30 compute-0 ceph-mon[75358]: pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:30 compute-0 sudo[153206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdsdokevgjyczuoxxqqxnfhxpvjfnaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843929.9352806-350-145190994154751/AnsiballZ_file.py'
Dec 04 10:25:30 compute-0 sudo[153206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:30 compute-0 python3.9[153208]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:25:30 compute-0 sudo[153206]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:30 compute-0 sudo[153358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsublhvrwkjgwuqlzqjbekowwttqgqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843930.5942059-358-135726253880116/AnsiballZ_stat.py'
Dec 04 10:25:30 compute-0 sudo[153358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:31 compute-0 python3.9[153360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:25:31 compute-0 sudo[153358]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:31 compute-0 ceph-mon[75358]: pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:31 compute-0 sudo[153481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whnanhgidyglllwydibotllezyxyymus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843930.5942059-358-135726253880116/AnsiballZ_copy.py'
Dec 04 10:25:31 compute-0 sudo[153481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:31 compute-0 python3.9[153483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843930.5942059-358-135726253880116/.source.json _original_basename=.pmbtyb4d follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:31 compute-0 sudo[153481]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:32 compute-0 sudo[153635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkzzzkzztgneqepnvcvvpuqexnuqdxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843931.7073948-373-458648061042/AnsiballZ_file.py'
Dec 04 10:25:32 compute-0 sudo[153635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:32 compute-0 python3.9[153637]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:32 compute-0 sudo[153635]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:32 compute-0 sudo[153787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjguackvsfscczzxdnqzqgdqltqpghpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843932.5958247-381-98511314346009/AnsiballZ_stat.py'
Dec 04 10:25:32 compute-0 sudo[153787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:33 compute-0 sudo[153787]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:33 compute-0 sudo[153910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uptdfkdgeutbzrjqppppwkghumufknuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843932.5958247-381-98511314346009/AnsiballZ_copy.py'
Dec 04 10:25:33 compute-0 sudo[153910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:33 compute-0 sudo[153910]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:33 compute-0 ceph-mon[75358]: pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:34 compute-0 sudo[154062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpmxwblkyqoubdcfiixfuwdiaextvcij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843933.8541348-398-198721530923558/AnsiballZ_container_config_data.py'
Dec 04 10:25:34 compute-0 sudo[154062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:34 compute-0 python3.9[154064]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 04 10:25:34 compute-0 sudo[154062]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:34 compute-0 sshd-session[154065]: Invalid user monitoring from 74.249.218.27 port 53326
Dec 04 10:25:34 compute-0 sshd-session[154065]: Received disconnect from 74.249.218.27 port 53326:11: Bye Bye [preauth]
Dec 04 10:25:34 compute-0 sshd-session[154065]: Disconnected from invalid user monitoring 74.249.218.27 port 53326 [preauth]
Dec 04 10:25:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:35 compute-0 sudo[154216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thveuubkaulkxmlciaglwmpxconhwlvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843934.7682729-407-129378857882074/AnsiballZ_container_config_hash.py'
Dec 04 10:25:35 compute-0 sudo[154216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:35 compute-0 python3.9[154218]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 04 10:25:35 compute-0 sudo[154216]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:35 compute-0 sshd-session[154289]: Invalid user admin1234 from 107.175.213.239 port 54206
Dec 04 10:25:35 compute-0 sshd-session[154289]: Received disconnect from 107.175.213.239 port 54206:11: Bye Bye [preauth]
Dec 04 10:25:35 compute-0 sshd-session[154289]: Disconnected from invalid user admin1234 107.175.213.239 port 54206 [preauth]
Dec 04 10:25:36 compute-0 ceph-mon[75358]: pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:36 compute-0 sudo[154370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfauafcbmyuujhvagiibcwhmrosfoihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843935.5850704-416-43033172849567/AnsiballZ_podman_container_info.py'
Dec 04 10:25:36 compute-0 sudo[154370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:36 compute-0 python3.9[154372]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 04 10:25:36 compute-0 sudo[154370]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:25:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:25:37 compute-0 sshd-session[153484]: Connection closed by 101.47.163.20 port 60792 [preauth]
Dec 04 10:25:37 compute-0 sudo[154549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htjltnyhybwckrzmxpfifzgphuutsydy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764843937.0402434-429-254105318826201/AnsiballZ_edpm_container_manage.py'
Dec 04 10:25:37 compute-0 sudo[154549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:25:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s
                                           Interval WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:25:37 compute-0 python3[154551]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 04 10:25:38 compute-0 ceph-mon[75358]: pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:40 compute-0 ceph-mon[75358]: pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:41 compute-0 ceph-mon[75358]: pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec 04 10:25:44 compute-0 ceph-mon[75358]: pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:44 compute-0 sudo[154630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:25:44 compute-0 sudo[154630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:44 compute-0 sudo[154630]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:44 compute-0 sudo[154655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 04 10:25:44 compute-0 sudo[154655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:45 compute-0 ceph-mon[75358]: pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:47 compute-0 ceph-mon[75358]: pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:47 compute-0 sudo[154655]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:25:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:25:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:47 compute-0 sudo[154733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:25:47 compute-0 sudo[154733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:47 compute-0 sudo[154733]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:47 compute-0 sudo[154758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:25:47 compute-0 sudo[154758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:47 compute-0 podman[154565]: 2025-12-04 10:25:47.472272391 +0000 UTC m=+9.592570572 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 04 10:25:47 compute-0 podman[154806]: 2025-12-04 10:25:47.645563467 +0000 UTC m=+0.066316635 container create 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 04 10:25:47 compute-0 podman[154806]: 2025-12-04 10:25:47.605599225 +0000 UTC m=+0.026352553 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 04 10:25:47 compute-0 python3[154551]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 04 10:25:47 compute-0 sudo[154549]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:48 compute-0 sudo[154758]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:25:48 compute-0 sudo[154977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:25:48 compute-0 sudo[154977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:48 compute-0 sudo[154977]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:48 compute-0 sudo[155030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:25:48 compute-0 sudo[155072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwwilpdodqemsmojrfxqgjlwzgdjzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843947.9384434-437-278002395431102/AnsiballZ_stat.py'
Dec 04 10:25:48 compute-0 sudo[155030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:48 compute-0 sudo[155072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:25:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:25:48 compute-0 python3.9[155077]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:25:48 compute-0 sudo[155072]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.580944666 +0000 UTC m=+0.047764387 container create b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:25:48 compute-0 systemd[1]: Started libpod-conmon-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope.
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.559580812 +0000 UTC m=+0.026400543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.702350659 +0000 UTC m=+0.169170400 container init b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.717017996 +0000 UTC m=+0.183837717 container start b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.720855946 +0000 UTC m=+0.187675667 container attach b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:25:48 compute-0 clever_villani[155132]: 167 167
Dec 04 10:25:48 compute-0 systemd[1]: libpod-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope: Deactivated successfully.
Dec 04 10:25:48 compute-0 conmon[155132]: conmon b1cabc71847cb1de5d9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope/container/memory.events
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.727652276 +0000 UTC m=+0.194472007 container died b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ecd84b5dcf3b62c85b509e6749ae35dbde22e44988cea49d47fc5ada4c5a6a0-merged.mount: Deactivated successfully.
Dec 04 10:25:48 compute-0 podman[155092]: 2025-12-04 10:25:48.7808505 +0000 UTC m=+0.247670251 container remove b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:25:48 compute-0 systemd[1]: libpod-conmon-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope: Deactivated successfully.
Dec 04 10:25:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.038225 +0000 UTC m=+0.070308518 container create 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:25:49 compute-0 systemd[1]: Started libpod-conmon-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope.
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.012675958 +0000 UTC m=+0.044759526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.158783353 +0000 UTC m=+0.190866911 container init 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.172587159 +0000 UTC m=+0.204670717 container start 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:25:49 compute-0 sudo[155316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpogzgkeyeqsfgnmhllehsocqescozos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843948.8056726-446-19877355338340/AnsiballZ_file.py'
Dec 04 10:25:49 compute-0 sudo[155316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.179123894 +0000 UTC m=+0.211207412 container attach 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:25:49 compute-0 podman[155269]: 2025-12-04 10:25:49.235764869 +0000 UTC m=+0.150264505 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:49 compute-0 ceph-mon[75358]: pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:49 compute-0 python3.9[155322]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:49 compute-0 sudo[155316]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:49 compute-0 sudo[155413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzipwyxljgzhielsqcteevsrcdflnwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843948.8056726-446-19877355338340/AnsiballZ_stat.py'
Dec 04 10:25:49 compute-0 sudo[155413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:49 compute-0 elated_elbakyan[155272]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:25:49 compute-0 elated_elbakyan[155272]: --> All data devices are unavailable
Dec 04 10:25:49 compute-0 systemd[1]: libpod-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope: Deactivated successfully.
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.772477736 +0000 UTC m=+0.804561274 container died 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670-merged.mount: Deactivated successfully.
Dec 04 10:25:49 compute-0 python3.9[155417]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:25:49 compute-0 podman[155230]: 2025-12-04 10:25:49.823824147 +0000 UTC m=+0.855907665 container remove 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:25:49 compute-0 systemd[1]: libpod-conmon-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope: Deactivated successfully.
Dec 04 10:25:49 compute-0 sudo[155413]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:49 compute-0 sudo[155030]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:49 compute-0 sudo[155435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:25:49 compute-0 sudo[155435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:49 compute-0 sudo[155435]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:49 compute-0 sudo[155485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:25:49 compute-0 sudo[155485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.255506637 +0000 UTC m=+0.044896310 container create 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:25:50 compute-0 systemd[1]: Started libpod-conmon-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope.
Dec 04 10:25:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.231178654 +0000 UTC m=+0.020568347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.342591252 +0000 UTC m=+0.131980935 container init 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.349830952 +0000 UTC m=+0.139220615 container start 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.353474208 +0000 UTC m=+0.142863971 container attach 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:25:50 compute-0 stupefied_driscoll[155629]: 167 167
Dec 04 10:25:50 compute-0 systemd[1]: libpod-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope: Deactivated successfully.
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.355201069 +0000 UTC m=+0.144590732 container died 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-14759ef6adea5c2e69c1aa1c72e9aceedf4b5926d7d84f89b15735dd3b3b9c59-merged.mount: Deactivated successfully.
Dec 04 10:25:50 compute-0 sudo[155673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxogwiyxcyockzdtsrcfuhntpufhupso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843949.9030097-446-215465618958658/AnsiballZ_copy.py'
Dec 04 10:25:50 compute-0 sudo[155673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:50 compute-0 podman[155578]: 2025-12-04 10:25:50.394230029 +0000 UTC m=+0.183619692 container remove 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:25:50 compute-0 systemd[1]: libpod-conmon-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope: Deactivated successfully.
Dec 04 10:25:50 compute-0 python3.9[155677]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843949.9030097-446-215465618958658/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:25:50 compute-0 podman[155686]: 2025-12-04 10:25:50.59607831 +0000 UTC m=+0.060086339 container create 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:25:50 compute-0 sudo[155673]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:50 compute-0 systemd[1]: Started libpod-conmon-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope.
Dec 04 10:25:50 compute-0 podman[155686]: 2025-12-04 10:25:50.561725779 +0000 UTC m=+0.025733848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:50 compute-0 podman[155686]: 2025-12-04 10:25:50.694329966 +0000 UTC m=+0.158337995 container init 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:25:50 compute-0 podman[155686]: 2025-12-04 10:25:50.701658679 +0000 UTC m=+0.165666688 container start 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:25:50 compute-0 podman[155686]: 2025-12-04 10:25:50.70552332 +0000 UTC m=+0.169531339 container attach 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:25:50 compute-0 sudo[155780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzdmmpbrgoyyksqswlcmevgrfiegusbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843949.9030097-446-215465618958658/AnsiballZ_systemd.py'
Dec 04 10:25:50 compute-0 sudo[155780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:51 compute-0 brave_blackburn[155709]: {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     "0": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "devices": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "/dev/loop3"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             ],
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_name": "ceph_lv0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_size": "21470642176",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "name": "ceph_lv0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "tags": {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.crush_device_class": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.encrypted": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_id": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.vdo": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.with_tpm": "0"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             },
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "vg_name": "ceph_vg0"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         }
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     ],
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     "1": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "devices": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "/dev/loop4"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             ],
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_name": "ceph_lv1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_size": "21470642176",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "name": "ceph_lv1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "tags": {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.crush_device_class": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.encrypted": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_id": "1",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.vdo": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.with_tpm": "0"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             },
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "vg_name": "ceph_vg1"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         }
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     ],
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     "2": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "devices": [
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "/dev/loop5"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             ],
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_name": "ceph_lv2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_size": "21470642176",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "name": "ceph_lv2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "tags": {
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.crush_device_class": "",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.encrypted": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osd_id": "2",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.vdo": "0",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:                 "ceph.with_tpm": "0"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             },
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "type": "block",
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:             "vg_name": "ceph_vg2"
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:         }
Dec 04 10:25:51 compute-0 brave_blackburn[155709]:     ]
Dec 04 10:25:51 compute-0 brave_blackburn[155709]: }
Dec 04 10:25:51 compute-0 systemd[1]: libpod-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope: Deactivated successfully.
Dec 04 10:25:51 compute-0 podman[155686]: 2025-12-04 10:25:51.054770816 +0000 UTC m=+0.518778865 container died 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d-merged.mount: Deactivated successfully.
Dec 04 10:25:51 compute-0 podman[155686]: 2025-12-04 10:25:51.107845878 +0000 UTC m=+0.571853897 container remove 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:25:51 compute-0 systemd[1]: libpod-conmon-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope: Deactivated successfully.
Dec 04 10:25:51 compute-0 python3.9[155782]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:25:51 compute-0 sudo[155485]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:51 compute-0 systemd[1]: Reloading.
Dec 04 10:25:51 compute-0 systemd-sysv-generator[155854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:25:51 compute-0 systemd-rc-local-generator[155851]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:25:51 compute-0 sudo[155800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:25:51 compute-0 sudo[155800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:51 compute-0 sudo[155800]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:51 compute-0 sudo[155780]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:51 compute-0 sudo[155860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:25:51 compute-0 sudo[155860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:51 compute-0 sudo[155958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdjlyemcwormusnywvhruhauysyihcrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843949.9030097-446-215465618958658/AnsiballZ_systemd.py'
Dec 04 10:25:51 compute-0 sudo[155958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:25:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.876950675 +0000 UTC m=+0.060229021 container create bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:25:51 compute-0 systemd[1]: Started libpod-conmon-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope.
Dec 04 10:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.854735782 +0000 UTC m=+0.038014128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.959147854 +0000 UTC m=+0.142426220 container init bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.967705876 +0000 UTC m=+0.150984212 container start bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.971330822 +0000 UTC m=+0.154609178 container attach bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:25:51 compute-0 admiring_einstein[155989]: 167 167
Dec 04 10:25:51 compute-0 systemd[1]: libpod-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope: Deactivated successfully.
Dec 04 10:25:51 compute-0 podman[155973]: 2025-12-04 10:25:51.974048016 +0000 UTC m=+0.157326352 container died bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:52 compute-0 ceph-mon[75358]: pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0070fceb6f9a6d51fe2e5baeeec41cb9522f9539351a12f70ee9df63e7c2cc7-merged.mount: Deactivated successfully.
Dec 04 10:25:52 compute-0 podman[155973]: 2025-12-04 10:25:52.015509753 +0000 UTC m=+0.198788099 container remove bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:25:52 compute-0 systemd[1]: libpod-conmon-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope: Deactivated successfully.
Dec 04 10:25:52 compute-0 python3.9[155960]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:25:52 compute-0 systemd[1]: Reloading.
Dec 04 10:25:52 compute-0 podman[156016]: 2025-12-04 10:25:52.194999677 +0000 UTC m=+0.047814160 container create 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:52 compute-0 systemd-sysv-generator[156059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:25:52 compute-0 systemd-rc-local-generator[156056]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:25:52 compute-0 podman[156016]: 2025-12-04 10:25:52.176216593 +0000 UTC m=+0.029031096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:25:52 compute-0 systemd[1]: Started libpod-conmon-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope.
Dec 04 10:25:52 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 04 10:25:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 podman[156016]: 2025-12-04 10:25:52.510729402 +0000 UTC m=+0.363543905 container init 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:25:52 compute-0 podman[156016]: 2025-12-04 10:25:52.525874129 +0000 UTC m=+0.378688642 container start 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:25:52 compute-0 podman[156016]: 2025-12-04 10:25:52.53142705 +0000 UTC m=+0.384241563 container attach 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:25:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b241c4d4c1b7ae85a99193280a3cd8c6217a74fed81c66a9a358d4273cda809/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b241c4d4c1b7ae85a99193280a3cd8c6217a74fed81c66a9a358d4273cda809/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 04 10:25:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567.
Dec 04 10:25:52 compute-0 podman[156074]: 2025-12-04 10:25:52.642840477 +0000 UTC m=+0.159497412 container init 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + sudo -E kolla_set_configs
Dec 04 10:25:52 compute-0 podman[156074]: 2025-12-04 10:25:52.670725645 +0000 UTC m=+0.187382570 container start 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 04 10:25:52 compute-0 edpm-start-podman-container[156074]: ovn_metadata_agent
Dec 04 10:25:52 compute-0 edpm-start-podman-container[156072]: Creating additional drop-in dependency for "ovn_metadata_agent" (292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567)
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Validating config file
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Copying service configuration files
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Writing out command to execute
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 04 10:25:52 compute-0 podman[156096]: 2025-12-04 10:25:52.764853846 +0000 UTC m=+0.082388614 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: ++ cat /run_command
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + CMD=neutron-ovn-metadata-agent
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + ARGS=
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + sudo kolla_copy_cacerts
Dec 04 10:25:52 compute-0 systemd[1]: Reloading.
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + [[ ! -n '' ]]
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + . kolla_extend_start
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: Running command: 'neutron-ovn-metadata-agent'
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + umask 0022
Dec 04 10:25:52 compute-0 ovn_metadata_agent[156090]: + exec neutron-ovn-metadata-agent
Dec 04 10:25:52 compute-0 systemd-rc-local-generator[156179]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:25:52 compute-0 systemd-sysv-generator[156182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:25:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:53 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 04 10:25:53 compute-0 sudo[155958]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:53 compute-0 lvm[156276]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:25:53 compute-0 lvm[156276]: VG ceph_vg1 finished
Dec 04 10:25:53 compute-0 lvm[156275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:25:53 compute-0 lvm[156275]: VG ceph_vg0 finished
Dec 04 10:25:53 compute-0 lvm[156278]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:25:53 compute-0 lvm[156278]: VG ceph_vg2 finished
Dec 04 10:25:53 compute-0 romantic_banzai[156070]: {}
Dec 04 10:25:53 compute-0 systemd[1]: libpod-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Deactivated successfully.
Dec 04 10:25:53 compute-0 systemd[1]: libpod-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Consumed 1.395s CPU time.
Dec 04 10:25:53 compute-0 podman[156016]: 2025-12-04 10:25:53.41002465 +0000 UTC m=+1.262839163 container died 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6-merged.mount: Deactivated successfully.
Dec 04 10:25:53 compute-0 podman[156016]: 2025-12-04 10:25:53.476348225 +0000 UTC m=+1.329162748 container remove 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:25:53 compute-0 systemd[1]: libpod-conmon-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Deactivated successfully.
Dec 04 10:25:53 compute-0 sshd-session[147141]: Connection closed by 192.168.122.30 port 42986
Dec 04 10:25:53 compute-0 sshd-session[147138]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:25:53 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 04 10:25:53 compute-0 systemd[1]: session-48.scope: Consumed 56.768s CPU time.
Dec 04 10:25:53 compute-0 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Dec 04 10:25:53 compute-0 systemd-logind[798]: Removed session 48.
Dec 04 10:25:53 compute-0 sudo[155860]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:25:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:25:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:53 compute-0 sudo[156295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:25:53 compute-0 sudo[156295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:25:53 compute-0 sudo[156295]: pam_unix(sudo:session): session closed for user root
Dec 04 10:25:54 compute-0 ceph-mon[75358]: pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.842 156095 INFO neutron.common.config [-] Logging enabled!
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.887 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.889 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.901 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 565580d5-3422-4e11-b563-3f1a3db67238 (UUID: 565580d5-3422-4e11-b563-3f1a3db67238) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.927 156095 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.927 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.928 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.928 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.930 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.937 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 04 10:25:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.942 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '565580d5-3422-4e11-b563-3f1a3db67238'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5e2acc0be0>], external_ids={}, name=565580d5-3422-4e11-b563-3f1a3db67238, nb_cfg_timestamp=1764843896953, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.943 156095 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5e2ac3a310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 INFO oslo_service.service [-] Starting 1 workers
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.948 156095 DEBUG oslo_service.service [-] Started child 156321 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.952 156095 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpasqnjo3q/privsep.sock']
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.955 156321 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169241'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 04 10:25:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.998 156321 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.999 156321 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.000 156321 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.005 156321 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.014 156321 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.028 156321 INFO eventlet.wsgi.server [-] (156321) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 04 10:25:55 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.661 156095 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.662 156095 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpasqnjo3q/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.500 156326 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.508 156326 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.513 156326 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.514 156326 INFO oslo.privsep.daemon [-] privsep daemon running as pid 156326
Dec 04 10:25:55 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.666 156326 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2aae2-f991-42f3-be8e-23af56f86d71]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 04 10:25:56 compute-0 ceph-mon[75358]: pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.222 156326 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.223 156326 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.223 156326 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:25:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.874 156326 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2250c3-ba22-49a4-8689-223efaca0ec9]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.877 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, column=external_ids, values=({'neutron:ovn-metadata-id': 'c6ca2f93-5873-55c3-abb7-70ed124c9f2a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.886 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.893 156095 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.893 156095 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.920 156095 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.920 156095 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:25:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 04 10:25:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:25:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:25:58 compute-0 ceph-mon[75358]: pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:25:59 compute-0 sshd-session[156331]: Accepted publickey for zuul from 192.168.122.30 port 51686 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:25:59 compute-0 systemd-logind[798]: New session 49 of user zuul.
Dec 04 10:25:59 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 04 10:25:59 compute-0 sshd-session[156331]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:26:00 compute-0 ceph-mon[75358]: pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:00 compute-0 python3.9[156486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:26:00 compute-0 sshd-session[156343]: Invalid user kingbase from 103.179.218.243 port 41906
Dec 04 10:26:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:01 compute-0 sshd-session[156343]: Received disconnect from 103.179.218.243 port 41906:11: Bye Bye [preauth]
Dec 04 10:26:01 compute-0 sshd-session[156343]: Disconnected from invalid user kingbase 103.179.218.243 port 41906 [preauth]
Dec 04 10:26:01 compute-0 sudo[156640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfpsispsbsjbygxssqilbcwldztptwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843960.9004085-34-67303055741644/AnsiballZ_command.py'
Dec 04 10:26:01 compute-0 sudo[156640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:01 compute-0 python3.9[156642]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:01 compute-0 sudo[156640]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:02 compute-0 ceph-mon[75358]: pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:02 compute-0 sudo[156804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycmhkehxyfqqvpvfhxlzmnnqopsbboii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843961.9729748-45-98347043527238/AnsiballZ_systemd_service.py'
Dec 04 10:26:02 compute-0 sudo[156804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:03 compute-0 python3.9[156806]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:26:03 compute-0 systemd[1]: Reloading.
Dec 04 10:26:03 compute-0 systemd-rc-local-generator[156829]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:26:03 compute-0 systemd-sysv-generator[156833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:26:03 compute-0 sudo[156804]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:04 compute-0 ceph-mon[75358]: pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:04 compute-0 python3.9[156991]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:26:04 compute-0 network[157008]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:26:04 compute-0 network[157009]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:26:04 compute-0 network[157010]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:26:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:06 compute-0 ceph-mon[75358]: pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:07 compute-0 sudo[157271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpmmkarroilsnkzhvnldxivgtxoxxdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843967.4999888-64-9425274851433/AnsiballZ_systemd_service.py'
Dec 04 10:26:07 compute-0 sudo[157271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:09 compute-0 ceph-mon[75358]: pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:09 compute-0 python3.9[157273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:09 compute-0 sudo[157271]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:09 compute-0 sudo[157426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivegzkjxsqiptsqtlyhqdxjivmbwedva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843969.4287453-64-173265235849920/AnsiballZ_systemd_service.py'
Dec 04 10:26:09 compute-0 sudo[157426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:10 compute-0 python3.9[157428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:10 compute-0 sudo[157426]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:10 compute-0 ceph-mon[75358]: pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:10 compute-0 sudo[157579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ognydpysclspgtzfnjgxdunwodkxrpjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843970.1706302-64-265574261415553/AnsiballZ_systemd_service.py'
Dec 04 10:26:10 compute-0 sudo[157579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:10 compute-0 python3.9[157581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:10 compute-0 sudo[157579]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:11 compute-0 sudo[157732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvabkqnleaonrpasvkckqjepseebbhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843970.9394567-64-137901542360601/AnsiballZ_systemd_service.py'
Dec 04 10:26:11 compute-0 sudo[157732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:11 compute-0 ceph-mon[75358]: pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:11 compute-0 python3.9[157734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:11 compute-0 sudo[157732]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:12 compute-0 sudo[157885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcyybvriezhfsjalwdowatoweymkgmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843971.9472249-64-7591731771685/AnsiballZ_systemd_service.py'
Dec 04 10:26:12 compute-0 sudo[157885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:12 compute-0 python3.9[157887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:12 compute-0 sudo[157885]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:26:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:12 compute-0 sudo[158039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcjsccrxbjgmwgmdnolaefpvluwjgfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843972.7137332-64-104242775288609/AnsiballZ_systemd_service.py'
Dec 04 10:26:12 compute-0 sudo[158039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:13 compute-0 python3.9[158041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:13 compute-0 sudo[158039]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:13 compute-0 sudo[158192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuqofhuyuyabhyzfljmrsrmsmdrfmxin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843973.4485307-64-194680865183000/AnsiballZ_systemd_service.py'
Dec 04 10:26:13 compute-0 sudo[158192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:14 compute-0 ceph-mon[75358]: pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:14 compute-0 python3.9[158194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:26:14 compute-0 sudo[158192]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:14 compute-0 sudo[158345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkugvhjhuhunflqwzyivqpusmlmabhjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843974.34948-116-259203301906092/AnsiballZ_file.py'
Dec 04 10:26:14 compute-0 sudo[158345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:15 compute-0 python3.9[158347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:15 compute-0 sudo[158345]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:15 compute-0 sudo[158497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luaqkavbbmbiojcqxnmmduedcjxfofsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843975.1786094-116-116756289993679/AnsiballZ_file.py'
Dec 04 10:26:15 compute-0 sudo[158497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:15 compute-0 python3.9[158499]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:15 compute-0 sudo[158497]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:16 compute-0 ceph-mon[75358]: pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:16 compute-0 sudo[158649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uayxfsuusssdaoalhuxbklrutdzlaezp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843975.8561997-116-220593786674184/AnsiballZ_file.py'
Dec 04 10:26:16 compute-0 sudo[158649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:16 compute-0 python3.9[158651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:16 compute-0 sudo[158649]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:16 compute-0 sudo[158801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fegqwlgmfsynsiqtfcfaqaycmzlvxbvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843976.424629-116-26633240173108/AnsiballZ_file.py'
Dec 04 10:26:16 compute-0 sudo[158801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:17 compute-0 python3.9[158803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:17 compute-0 sudo[158801]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:17 compute-0 ceph-mon[75358]: pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:17 compute-0 sudo[158953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inpctrnoipylnmzkvhiwuqnouaulnvqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843977.4023402-116-33918491387478/AnsiballZ_file.py'
Dec 04 10:26:17 compute-0 sudo[158953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:17 compute-0 python3.9[158955]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:17 compute-0 sudo[158953]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:18 compute-0 sudo[159105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phsoexkokpqiodqfqczjkkcxwgswormu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843978.024399-116-40199393284453/AnsiballZ_file.py'
Dec 04 10:26:18 compute-0 sudo[159105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:18 compute-0 python3.9[159107]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:18 compute-0 sudo[159105]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:18 compute-0 sudo[159257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcptaqyldelkwqgwazdpnhaygalnbwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843978.6174383-116-159154767679235/AnsiballZ_file.py'
Dec 04 10:26:18 compute-0 sudo[159257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:19 compute-0 python3.9[159259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:19 compute-0 sudo[159257]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:19 compute-0 ceph-mon[75358]: pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:19 compute-0 sudo[159420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovvtfwymhovjitymmoocfaasiulltlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843979.6826668-166-240011454271163/AnsiballZ_file.py'
Dec 04 10:26:19 compute-0 sudo[159420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:20 compute-0 podman[159382]: 2025-12-04 10:26:20.069866866 +0000 UTC m=+0.180208837 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 04 10:26:20 compute-0 python3.9[159429]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:20 compute-0 sudo[159420]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:20 compute-0 sudo[159587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrgibynylbqjcnqzatmyycfzppubapuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843980.27713-166-72570251365877/AnsiballZ_file.py'
Dec 04 10:26:20 compute-0 sudo[159587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:20 compute-0 python3.9[159589]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:20 compute-0 sudo[159587]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:20 compute-0 sshd-session[157090]: Invalid user user from 51.52.210.77 port 35167
Dec 04 10:26:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:21 compute-0 sudo[159739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txqczatnljukabbrdlhfnxgvekjazlwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843980.9475355-166-148306571876801/AnsiballZ_file.py'
Dec 04 10:26:21 compute-0 sudo[159739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:21 compute-0 python3.9[159741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:21 compute-0 sudo[159739]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:21 compute-0 sudo[159891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szoryhopjvubtryjucfjgkfuualmqxvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843981.5361936-166-253449579924479/AnsiballZ_file.py'
Dec 04 10:26:21 compute-0 sudo[159891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:22 compute-0 python3.9[159893]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:22 compute-0 sudo[159891]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:22 compute-0 ceph-mon[75358]: pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:22 compute-0 sudo[160043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prdlkgvsgrkozevvchpghycpmalqndra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843982.298682-166-170988178373579/AnsiballZ_file.py'
Dec 04 10:26:22 compute-0 sudo[160043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:22 compute-0 python3.9[160045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:22 compute-0 sudo[160043]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:22 compute-0 podman[160076]: 2025-12-04 10:26:22.957004408 +0000 UTC m=+0.058645176 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:26:23 compute-0 sudo[160214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqyuiwaaxuovzfiejmovvkysiezkour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843982.8890724-166-204227480247667/AnsiballZ_file.py'
Dec 04 10:26:23 compute-0 sudo[160214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:23 compute-0 ceph-mon[75358]: pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:23 compute-0 python3.9[160216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:23 compute-0 sudo[160214]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:23 compute-0 sudo[160366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljniintgppbemmizcwcqtyacfoqtjfby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843983.4561467-166-217161511644185/AnsiballZ_file.py'
Dec 04 10:26:23 compute-0 sudo[160366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:23 compute-0 python3.9[160368]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:26:23 compute-0 sudo[160366]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:24 compute-0 sudo[160518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrsgjpgclfxnjrazdsbnhjgijemnpzgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843984.1171227-217-279659601416134/AnsiballZ_command.py'
Dec 04 10:26:24 compute-0 sudo[160518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:24 compute-0 python3.9[160520]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:24 compute-0 sudo[160518]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:24 compute-0 sshd-session[157090]: Connection closed by invalid user user 51.52.210.77 port 35167 [preauth]
Dec 04 10:26:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:25 compute-0 python3.9[160672]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:26:25 compute-0 sudo[160822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjiitadewicqysvbgdifordufxvtnjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843985.577481-235-7155423145233/AnsiballZ_systemd_service.py'
Dec 04 10:26:25 compute-0 sudo[160822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:26 compute-0 ceph-mon[75358]: pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:26 compute-0 python3.9[160824]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:26:26 compute-0 systemd[1]: Reloading.
Dec 04 10:26:26 compute-0 systemd-rc-local-generator[160853]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:26:26 compute-0 systemd-sysv-generator[160857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:26:26 compute-0 sudo[160822]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:26:26
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', 'default.rgw.log']
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:26:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:26 compute-0 sudo[161010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbaszembysaspcmhmfhrqjrtgydvrrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843986.59077-243-265469324513793/AnsiballZ_command.py'
Dec 04 10:26:26 compute-0 sudo[161010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:26 compute-0 sshd-session[160859]: Invalid user alex from 217.154.62.22 port 60552
Dec 04 10:26:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:26 compute-0 sshd-session[160859]: Received disconnect from 217.154.62.22 port 60552:11: Bye Bye [preauth]
Dec 04 10:26:26 compute-0 sshd-session[160859]: Disconnected from invalid user alex 217.154.62.22 port 60552 [preauth]
Dec 04 10:26:27 compute-0 python3.9[161012]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:27 compute-0 sudo[161010]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:27 compute-0 sudo[161163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azgdffobynrbzfxusdexqsmaisewjsrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843987.1926641-243-258988333719598/AnsiballZ_command.py'
Dec 04 10:26:27 compute-0 sudo[161163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:27 compute-0 python3.9[161165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:27 compute-0 sudo[161163]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:26:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:26:28 compute-0 ceph-mon[75358]: pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:28 compute-0 sudo[161316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luylmricahgsucdybmggfeiaigtjmtoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843987.9385467-243-138949608485782/AnsiballZ_command.py'
Dec 04 10:26:28 compute-0 sudo[161316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:28 compute-0 python3.9[161318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:28 compute-0 sudo[161316]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:28 compute-0 sudo[161469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgousqzqtzjrftxtjjmnxawxmpumrhea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843988.5603147-243-212464508034376/AnsiballZ_command.py'
Dec 04 10:26:28 compute-0 sudo[161469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:29 compute-0 python3.9[161471]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:29 compute-0 sudo[161469]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:29 compute-0 ceph-mon[75358]: pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:29 compute-0 sudo[161622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvylhcznxjcjjdxtcujrldhvtoyocqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843989.1573815-243-218421345880901/AnsiballZ_command.py'
Dec 04 10:26:29 compute-0 sudo[161622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:29 compute-0 python3.9[161624]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:29 compute-0 sudo[161622]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:29 compute-0 sudo[161775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyoxdonjvoxwnvvnyszhykpbrnzgszn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843989.7481778-243-144275846286642/AnsiballZ_command.py'
Dec 04 10:26:29 compute-0 sudo[161775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:30 compute-0 python3.9[161777]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:30 compute-0 sudo[161775]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:30 compute-0 sudo[161928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljipuajzdyvosmqoinphausvexccfqpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843990.329601-243-149086183700608/AnsiballZ_command.py'
Dec 04 10:26:30 compute-0 sudo[161928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:30 compute-0 python3.9[161930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:26:30 compute-0 sudo[161928]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:31 compute-0 sudo[162081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzhlbygdylugiizbyedggpxzpxtobykn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843991.155445-297-91075152520584/AnsiballZ_getent.py'
Dec 04 10:26:31 compute-0 sudo[162081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:31 compute-0 python3.9[162083]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 04 10:26:31 compute-0 sudo[162081]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:32 compute-0 ceph-mon[75358]: pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:32 compute-0 sudo[162234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbaoerfmidvjojqpbvwgqjuvsjzgcdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843992.0458891-305-46557841338100/AnsiballZ_group.py'
Dec 04 10:26:32 compute-0 sudo[162234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:32 compute-0 python3.9[162236]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:26:32 compute-0 groupadd[162237]: group added to /etc/group: name=libvirt, GID=42473
Dec 04 10:26:32 compute-0 groupadd[162237]: group added to /etc/gshadow: name=libvirt
Dec 04 10:26:32 compute-0 groupadd[162237]: new group: name=libvirt, GID=42473
Dec 04 10:26:32 compute-0 sudo[162234]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:33 compute-0 sudo[162392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyspwzdcezvooimoafdpqcgrujxoxxuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843992.9829624-313-202389795162838/AnsiballZ_user.py'
Dec 04 10:26:33 compute-0 sudo[162392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:33 compute-0 python3.9[162394]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 04 10:26:33 compute-0 useradd[162396]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 04 10:26:33 compute-0 sudo[162392]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:34 compute-0 ceph-mon[75358]: pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:34 compute-0 sudo[162552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfeuzitnudxcdcwasszqwiaeqptdffc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843994.0216587-324-146679219161964/AnsiballZ_setup.py'
Dec 04 10:26:34 compute-0 sudo[162552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:34 compute-0 python3.9[162554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:26:34 compute-0 sudo[162552]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:35 compute-0 sudo[162636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvednovminhnzxhqdakoragvwgxjxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764843994.0216587-324-146679219161964/AnsiballZ_dnf.py'
Dec 04 10:26:35 compute-0 sudo[162636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:26:35 compute-0 python3.9[162638]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:26:36 compute-0 ceph-mon[75358]: pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:26:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:26:37 compute-0 ceph-mon[75358]: pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:39 compute-0 ceph-mon[75358]: pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:41 compute-0 ceph-mon[75358]: pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:45 compute-0 ceph-mon[75358]: pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:46 compute-0 ceph-mon[75358]: pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:48 compute-0 ceph-mon[75358]: pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:50 compute-0 ceph-mon[75358]: pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:51 compute-0 podman[162652]: 2025-12-04 10:26:51.202049799 +0000 UTC m=+0.303130111 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 04 10:26:51 compute-0 ceph-mon[75358]: pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:52 compute-0 sshd-session[162678]: Invalid user radarr from 74.249.218.27 port 36568
Dec 04 10:26:52 compute-0 sshd-session[162678]: Received disconnect from 74.249.218.27 port 36568:11: Bye Bye [preauth]
Dec 04 10:26:52 compute-0 sshd-session[162678]: Disconnected from invalid user radarr 74.249.218.27 port 36568 [preauth]
Dec 04 10:26:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:53 compute-0 sudo[162680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:26:53 compute-0 sudo[162680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:53 compute-0 sudo[162680]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:53 compute-0 sudo[162710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:26:53 compute-0 sudo[162710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:53 compute-0 podman[162704]: 2025-12-04 10:26:53.8016764 +0000 UTC m=+0.058361749 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:26:54 compute-0 sudo[162710]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:26:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:26:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:26:54 compute-0 sudo[162792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:26:54 compute-0 sudo[162792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:54 compute-0 sudo[162792]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:54 compute-0 sudo[162820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:26:54 compute-0 sudo[162820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:54 compute-0 podman[162868]: 2025-12-04 10:26:54.881518505 +0000 UTC m=+0.052548012 container create 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:26:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.890 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:26:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:26:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:26:54 compute-0 systemd[1]: Started libpod-conmon-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope.
Dec 04 10:26:54 compute-0 podman[162868]: 2025-12-04 10:26:54.853305879 +0000 UTC m=+0.024335406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 04 10:26:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:54 compute-0 podman[162868]: 2025-12-04 10:26:54.990060959 +0000 UTC m=+0.161090486 container init 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:26:54 compute-0 podman[162868]: 2025-12-04 10:26:54.999163134 +0000 UTC m=+0.170192641 container start 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:26:55 compute-0 podman[162868]: 2025-12-04 10:26:55.002613626 +0000 UTC m=+0.173643133 container attach 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:26:55 compute-0 musing_mcnulty[162887]: 167 167
Dec 04 10:26:55 compute-0 systemd[1]: libpod-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope: Deactivated successfully.
Dec 04 10:26:55 compute-0 podman[162868]: 2025-12-04 10:26:55.006012126 +0000 UTC m=+0.177041633 container died 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:26:55 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-faeb26bc59c590652d44895cb3755ba3b3a4b8dc99701c5deb178b9ea288300b-merged.mount: Deactivated successfully.
Dec 04 10:26:55 compute-0 podman[162868]: 2025-12-04 10:26:55.053684472 +0000 UTC m=+0.224713979 container remove 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:26:55 compute-0 systemd[1]: libpod-conmon-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope: Deactivated successfully.
Dec 04 10:26:55 compute-0 podman[162920]: 2025-12-04 10:26:55.233628802 +0000 UTC m=+0.049042419 container create 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:26:55 compute-0 systemd[1]: Started libpod-conmon-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope.
Dec 04 10:26:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:55 compute-0 podman[162920]: 2025-12-04 10:26:55.213842785 +0000 UTC m=+0.029256412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:55 compute-0 podman[162920]: 2025-12-04 10:26:55.323463944 +0000 UTC m=+0.138877561 container init 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:26:55 compute-0 podman[162920]: 2025-12-04 10:26:55.331718309 +0000 UTC m=+0.147131926 container start 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:26:55 compute-0 podman[162920]: 2025-12-04 10:26:55.337083176 +0000 UTC m=+0.152496843 container attach 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:26:55 compute-0 boring_spence[162941]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:26:55 compute-0 boring_spence[162941]: --> All data devices are unavailable
Dec 04 10:26:55 compute-0 systemd[1]: libpod-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope: Deactivated successfully.
Dec 04 10:26:55 compute-0 podman[162979]: 2025-12-04 10:26:55.880644614 +0000 UTC m=+0.029558199 container died 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb-merged.mount: Deactivated successfully.
Dec 04 10:26:55 compute-0 podman[162979]: 2025-12-04 10:26:55.922725408 +0000 UTC m=+0.071638973 container remove 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:26:55 compute-0 systemd[1]: libpod-conmon-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope: Deactivated successfully.
Dec 04 10:26:55 compute-0 sudo[162820]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:56 compute-0 sudo[163000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:26:56 compute-0 sudo[163000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:56 compute-0 sudo[163000]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:56 compute-0 ceph-mon[75358]: pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 04 10:26:56 compute-0 sudo[163027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:26:56 compute-0 sudo[163027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.35254705 +0000 UTC m=+0.046890679 container create ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:26:56 compute-0 systemd[1]: Started libpod-conmon-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope.
Dec 04 10:26:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.332919716 +0000 UTC m=+0.027263355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.436669617 +0000 UTC m=+0.131013256 container init ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.4431613 +0000 UTC m=+0.137504919 container start ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:26:56 compute-0 elegant_cohen[163098]: 167 167
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.447289867 +0000 UTC m=+0.141633516 container attach ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:26:56 compute-0 systemd[1]: libpod-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope: Deactivated successfully.
Dec 04 10:26:56 compute-0 conmon[163098]: conmon ef0f091fde96c8d19563 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope/container/memory.events
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.449878038 +0000 UTC m=+0.144221677 container died ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-db1760ef700d410dca131626fcb47b105a93cf2e4ca17d1123cf36277483663e-merged.mount: Deactivated successfully.
Dec 04 10:26:56 compute-0 podman[163076]: 2025-12-04 10:26:56.488157893 +0000 UTC m=+0.182501532 container remove ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:26:56 compute-0 systemd[1]: libpod-conmon-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope: Deactivated successfully.
Dec 04 10:26:56 compute-0 podman[163130]: 2025-12-04 10:26:56.659324836 +0000 UTC m=+0.042191148 container create b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:26:56 compute-0 systemd[1]: Started libpod-conmon-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope.
Dec 04 10:26:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:56 compute-0 podman[163130]: 2025-12-04 10:26:56.639862476 +0000 UTC m=+0.022728808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:56 compute-0 podman[163130]: 2025-12-04 10:26:56.754017293 +0000 UTC m=+0.136883615 container init b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:26:56 compute-0 podman[163130]: 2025-12-04 10:26:56.760619208 +0000 UTC m=+0.143485520 container start b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:26:56 compute-0 podman[163130]: 2025-12-04 10:26:56.763837784 +0000 UTC m=+0.146704146 container attach b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:26:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:26:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Dec 04 10:26:57 compute-0 gifted_pare[163149]: {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     "0": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "devices": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "/dev/loop3"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             ],
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_name": "ceph_lv0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_size": "21470642176",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "name": "ceph_lv0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "tags": {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_name": "ceph",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.crush_device_class": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.encrypted": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.objectstore": "bluestore",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_id": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.vdo": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.with_tpm": "0"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             },
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "vg_name": "ceph_vg0"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         }
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     ],
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     "1": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "devices": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "/dev/loop4"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             ],
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_name": "ceph_lv1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_size": "21470642176",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "name": "ceph_lv1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "tags": {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_name": "ceph",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.crush_device_class": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.encrypted": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.objectstore": "bluestore",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_id": "1",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.vdo": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.with_tpm": "0"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             },
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "vg_name": "ceph_vg1"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         }
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     ],
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     "2": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "devices": [
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "/dev/loop5"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             ],
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_name": "ceph_lv2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_size": "21470642176",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "name": "ceph_lv2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "tags": {
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.cluster_name": "ceph",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.crush_device_class": "",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.encrypted": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.objectstore": "bluestore",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osd_id": "2",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.vdo": "0",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:                 "ceph.with_tpm": "0"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             },
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "type": "block",
Dec 04 10:26:57 compute-0 gifted_pare[163149]:             "vg_name": "ceph_vg2"
Dec 04 10:26:57 compute-0 gifted_pare[163149]:         }
Dec 04 10:26:57 compute-0 gifted_pare[163149]:     ]
Dec 04 10:26:57 compute-0 gifted_pare[163149]: }
Dec 04 10:26:57 compute-0 systemd[1]: libpod-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope: Deactivated successfully.
Dec 04 10:26:57 compute-0 podman[163130]: 2025-12-04 10:26:57.060515601 +0000 UTC m=+0.443381933 container died b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb-merged.mount: Deactivated successfully.
Dec 04 10:26:57 compute-0 podman[163130]: 2025-12-04 10:26:57.103054036 +0000 UTC m=+0.485920348 container remove b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:26:57 compute-0 systemd[1]: libpod-conmon-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope: Deactivated successfully.
Dec 04 10:26:57 compute-0 sudo[163027]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:57 compute-0 sudo[163187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:26:57 compute-0 sudo[163187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:57 compute-0 sudo[163187]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:57 compute-0 sudo[163215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:26:57 compute-0 sudo[163215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:57 compute-0 sshd-session[163056]: Invalid user opc from 103.149.86.230 port 38274
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.57700055 +0000 UTC m=+0.063415619 container create f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:26:57 compute-0 systemd[1]: Started libpod-conmon-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope.
Dec 04 10:26:57 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.550223778 +0000 UTC m=+0.036638867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.654961012 +0000 UTC m=+0.141376071 container init f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.66335449 +0000 UTC m=+0.149769539 container start f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:26:57 compute-0 compassionate_taussig[163278]: 167 167
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.66677616 +0000 UTC m=+0.153191239 container attach f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:26:57 compute-0 systemd[1]: libpod-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope: Deactivated successfully.
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.66803468 +0000 UTC m=+0.154449739 container died f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:26:57 compute-0 sshd-session[163056]: Received disconnect from 103.149.86.230 port 38274:11: Bye Bye [preauth]
Dec 04 10:26:57 compute-0 sshd-session[163056]: Disconnected from invalid user opc 103.149.86.230 port 38274 [preauth]
Dec 04 10:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cae07ce66045a26bfe2f671c6cfb3790e9836dd99e5b16a5b373f496bba80a9-merged.mount: Deactivated successfully.
Dec 04 10:26:57 compute-0 podman[163258]: 2025-12-04 10:26:57.716380582 +0000 UTC m=+0.202795651 container remove f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:26:57 compute-0 systemd[1]: libpod-conmon-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope: Deactivated successfully.
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:26:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:26:57 compute-0 podman[163309]: 2025-12-04 10:26:57.937902004 +0000 UTC m=+0.075464903 container create b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:26:57 compute-0 systemd[1]: Started libpod-conmon-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope.
Dec 04 10:26:58 compute-0 podman[163309]: 2025-12-04 10:26:57.909602626 +0000 UTC m=+0.047165455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:26:58 compute-0 podman[163309]: 2025-12-04 10:26:58.037932958 +0000 UTC m=+0.175495847 container init b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 10:26:58 compute-0 ceph-mon[75358]: pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Dec 04 10:26:58 compute-0 podman[163309]: 2025-12-04 10:26:58.052363808 +0000 UTC m=+0.189926597 container start b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:26:58 compute-0 podman[163309]: 2025-12-04 10:26:58.056638009 +0000 UTC m=+0.194200848 container attach b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:26:58 compute-0 lvm[163436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:26:58 compute-0 lvm[163437]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:26:58 compute-0 lvm[163440]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:26:58 compute-0 lvm[163440]: VG ceph_vg2 finished
Dec 04 10:26:58 compute-0 lvm[163437]: VG ceph_vg1 finished
Dec 04 10:26:58 compute-0 lvm[163436]: VG ceph_vg0 finished
Dec 04 10:26:58 compute-0 tender_wing[163329]: {}
Dec 04 10:26:58 compute-0 systemd[1]: libpod-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Deactivated successfully.
Dec 04 10:26:58 compute-0 systemd[1]: libpod-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Consumed 1.351s CPU time.
Dec 04 10:26:58 compute-0 podman[163309]: 2025-12-04 10:26:58.940328881 +0000 UTC m=+1.077891670 container died b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 04 10:26:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4-merged.mount: Deactivated successfully.
Dec 04 10:26:59 compute-0 podman[163309]: 2025-12-04 10:26:59.058273406 +0000 UTC m=+1.195836195 container remove b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:26:59 compute-0 systemd[1]: libpod-conmon-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Deactivated successfully.
Dec 04 10:26:59 compute-0 sudo[163215]: pam_unix(sudo:session): session closed for user root
Dec 04 10:26:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:26:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:26:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:26:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:26:59 compute-0 sudo[163463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:26:59 compute-0 sudo[163463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:26:59 compute-0 sudo[163463]: pam_unix(sudo:session): session closed for user root
Dec 04 10:27:00 compute-0 ceph-mon[75358]: pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:27:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:27:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:01 compute-0 ceph-mon[75358]: pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:04 compute-0 ceph-mon[75358]: pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:06 compute-0 ceph-mon[75358]: pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:27:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 04 10:27:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Dec 04 10:27:09 compute-0 ceph-mon[75358]: pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 04 10:27:10 compute-0 ceph-mon[75358]: pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Dec 04 10:27:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:11 compute-0 ceph-mon[75358]: pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:14 compute-0 ceph-mon[75358]: pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:18 compute-0 ceph-mon[75358]: pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:18 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:27:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:27:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:19 compute-0 ceph-mon[75358]: pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:19 compute-0 ceph-mon[75358]: pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:21 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 04 10:27:22 compute-0 podman[163503]: 2025-12-04 10:27:22.012853795 +0000 UTC m=+0.106343337 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 04 10:27:22 compute-0 ceph-mon[75358]: pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:23 compute-0 podman[163529]: 2025-12-04 10:27:23.96103632 +0000 UTC m=+0.060614530 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 04 10:27:24 compute-0 ceph-mon[75358]: pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:26 compute-0 ceph-mon[75358]: pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:27:26
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:27:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:27 compute-0 ceph-mon[75358]: pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:27:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:27:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:29 compute-0 ceph-mon[75358]: pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:30 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:27:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:27:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:32 compute-0 ceph-mon[75358]: pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:32 compute-0 sshd-session[163555]: Invalid user work from 103.179.218.243 port 42008
Dec 04 10:27:32 compute-0 sshd-session[163555]: Received disconnect from 103.179.218.243 port 42008:11: Bye Bye [preauth]
Dec 04 10:27:32 compute-0 sshd-session[163555]: Disconnected from invalid user work 103.179.218.243 port 42008 [preauth]
Dec 04 10:27:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:33 compute-0 sshd-session[163557]: Invalid user server from 107.175.213.239 port 38118
Dec 04 10:27:33 compute-0 sshd-session[163557]: Received disconnect from 107.175.213.239 port 38118:11: Bye Bye [preauth]
Dec 04 10:27:33 compute-0 sshd-session[163557]: Disconnected from invalid user server 107.175.213.239 port 38118 [preauth]
Dec 04 10:27:34 compute-0 ceph-mon[75358]: pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:36 compute-0 ceph-mon[75358]: pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:27:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:27:38 compute-0 ceph-mon[75358]: pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:40 compute-0 ceph-mon[75358]: pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:41 compute-0 ceph-mon[75358]: pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:44 compute-0 ceph-mon[75358]: pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:46 compute-0 ceph-mon[75358]: pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:48 compute-0 ceph-mon[75358]: pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:50 compute-0 ceph-mon[75358]: pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:52 compute-0 ceph-mon[75358]: pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:52 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 04 10:27:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:52 compute-0 podman[170401]: 2025-12-04 10:27:52.994022458 +0000 UTC m=+0.091210587 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 04 10:27:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:53 compute-0 ceph-mon[75358]: pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.892 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:27:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:27:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:27:54 compute-0 podman[171844]: 2025-12-04 10:27:54.966054424 +0000 UTC m=+0.075382941 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:27:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:56 compute-0 ceph-mon[75358]: pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:57 compute-0 ceph-mon[75358]: pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:27:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:27:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:27:58 compute-0 sshd-session[173997]: Invalid user zimbra from 217.154.62.22 port 44130
Dec 04 10:27:58 compute-0 sshd-session[173997]: Received disconnect from 217.154.62.22 port 44130:11: Bye Bye [preauth]
Dec 04 10:27:58 compute-0 sshd-session[173997]: Disconnected from invalid user zimbra 217.154.62.22 port 44130 [preauth]
Dec 04 10:27:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:59 compute-0 ceph-mon[75358]: pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:27:59 compute-0 sudo[174842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:27:59 compute-0 sudo[174842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:27:59 compute-0 sudo[174842]: pam_unix(sudo:session): session closed for user root
Dec 04 10:27:59 compute-0 sudo[174917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:27:59 compute-0 sudo[174917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:00 compute-0 podman[175323]: 2025-12-04 10:28:00.16239167 +0000 UTC m=+0.341465788 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:28:00 compute-0 podman[175659]: 2025-12-04 10:28:00.348372656 +0000 UTC m=+0.080325427 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:28:00 compute-0 podman[175323]: 2025-12-04 10:28:00.355599372 +0000 UTC m=+0.534673490 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:28:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:01 compute-0 sudo[174917]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:01 compute-0 sudo[176419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:28:01 compute-0 sudo[176419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:01 compute-0 sudo[176419]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:01 compute-0 sudo[176484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:28:01 compute-0 sudo[176484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:01 compute-0 sudo[176484]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:28:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:28:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:28:01 compute-0 sudo[176942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:28:01 compute-0 sudo[176942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:01 compute-0 sudo[176942]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:01 compute-0 sudo[177011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:28:01 compute-0 sudo[177011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:02 compute-0 ceph-mon[75358]: pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:28:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.157762955 +0000 UTC m=+0.048582103 container create a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:28:02 compute-0 systemd[1]: Started libpod-conmon-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope.
Dec 04 10:28:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.139604476 +0000 UTC m=+0.030423624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.247405676 +0000 UTC m=+0.138224894 container init a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.256178118 +0000 UTC m=+0.146997256 container start a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:28:02 compute-0 gifted_shtern[177326]: 167 167
Dec 04 10:28:02 compute-0 systemd[1]: libpod-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope: Deactivated successfully.
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.261029731 +0000 UTC m=+0.151848869 container attach a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.261733656 +0000 UTC m=+0.152552834 container died a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e8c71c9277bad14dbfd95852d5d39af2013f260a06279a07eb6360bcce488ed-merged.mount: Deactivated successfully.
Dec 04 10:28:02 compute-0 podman[177237]: 2025-12-04 10:28:02.312777046 +0000 UTC m=+0.203596224 container remove a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:28:02 compute-0 systemd[1]: libpod-conmon-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope: Deactivated successfully.
Dec 04 10:28:02 compute-0 podman[177500]: 2025-12-04 10:28:02.504177456 +0000 UTC m=+0.050183610 container create 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:28:02 compute-0 systemd[1]: Started libpod-conmon-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope.
Dec 04 10:28:02 compute-0 podman[177500]: 2025-12-04 10:28:02.481138074 +0000 UTC m=+0.027144218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:02 compute-0 podman[177500]: 2025-12-04 10:28:02.597801538 +0000 UTC m=+0.143807662 container init 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:28:02 compute-0 podman[177500]: 2025-12-04 10:28:02.607639045 +0000 UTC m=+0.153645169 container start 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:28:02 compute-0 podman[177500]: 2025-12-04 10:28:02.611647498 +0000 UTC m=+0.157653702 container attach 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:28:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:03 compute-0 dreamy_lumiere[177584]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:28:03 compute-0 dreamy_lumiere[177584]: --> All data devices are unavailable
Dec 04 10:28:03 compute-0 systemd[1]: libpod-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope: Deactivated successfully.
Dec 04 10:28:03 compute-0 podman[177500]: 2025-12-04 10:28:03.132559409 +0000 UTC m=+0.678565533 container died 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8-merged.mount: Deactivated successfully.
Dec 04 10:28:03 compute-0 podman[177500]: 2025-12-04 10:28:03.178086481 +0000 UTC m=+0.724092615 container remove 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:28:03 compute-0 systemd[1]: libpod-conmon-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope: Deactivated successfully.
Dec 04 10:28:03 compute-0 sudo[177011]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:03 compute-0 sudo[178091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:28:03 compute-0 sudo[178091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:03 compute-0 sudo[178091]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:03 compute-0 sudo[178156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:28:03 compute-0 sudo[178156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.66211817 +0000 UTC m=+0.044598982 container create 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:28:03 compute-0 systemd[1]: Started libpod-conmon-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope.
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.642964938 +0000 UTC m=+0.025445770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.761309391 +0000 UTC m=+0.143790223 container init 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.768334643 +0000 UTC m=+0.150815455 container start 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.771601918 +0000 UTC m=+0.154082750 container attach 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:28:03 compute-0 sad_varahamihira[178465]: 167 167
Dec 04 10:28:03 compute-0 systemd[1]: libpod-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope: Deactivated successfully.
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.773658236 +0000 UTC m=+0.156139048 container died 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1406b06520a36e1bd0b66c84a7c729a300af72ab8ee1314aed4b43a34c2e2f35-merged.mount: Deactivated successfully.
Dec 04 10:28:03 compute-0 podman[178380]: 2025-12-04 10:28:03.818836389 +0000 UTC m=+0.201317191 container remove 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:28:03 compute-0 systemd[1]: libpod-conmon-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope: Deactivated successfully.
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.000403213 +0000 UTC m=+0.046074165 container create 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:28:04 compute-0 systemd[1]: Started libpod-conmon-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope.
Dec 04 10:28:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.073337857 +0000 UTC m=+0.119008819 container init 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:03.981834395 +0000 UTC m=+0.027505367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.079041279 +0000 UTC m=+0.124712221 container start 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.081992927 +0000 UTC m=+0.127664009 container attach 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:28:04 compute-0 ceph-mon[75358]: pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:04 compute-0 awesome_diffie[178714]: {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     "0": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "devices": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "/dev/loop3"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             ],
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_name": "ceph_lv0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_size": "21470642176",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "name": "ceph_lv0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "tags": {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_name": "ceph",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.crush_device_class": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.encrypted": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.objectstore": "bluestore",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_id": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.vdo": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.with_tpm": "0"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             },
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "vg_name": "ceph_vg0"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         }
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     ],
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     "1": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "devices": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "/dev/loop4"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             ],
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_name": "ceph_lv1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_size": "21470642176",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "name": "ceph_lv1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "tags": {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_name": "ceph",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.crush_device_class": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.encrypted": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.objectstore": "bluestore",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_id": "1",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.vdo": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.with_tpm": "0"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             },
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "vg_name": "ceph_vg1"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         }
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     ],
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     "2": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "devices": [
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "/dev/loop5"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             ],
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_name": "ceph_lv2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_size": "21470642176",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "name": "ceph_lv2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "tags": {
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.cluster_name": "ceph",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.crush_device_class": "",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.encrypted": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.objectstore": "bluestore",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osd_id": "2",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.vdo": "0",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:                 "ceph.with_tpm": "0"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             },
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "type": "block",
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:             "vg_name": "ceph_vg2"
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:         }
Dec 04 10:28:04 compute-0 awesome_diffie[178714]:     ]
Dec 04 10:28:04 compute-0 awesome_diffie[178714]: }
Dec 04 10:28:04 compute-0 systemd[1]: libpod-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope: Deactivated successfully.
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.374930373 +0000 UTC m=+0.420601325 container died 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112-merged.mount: Deactivated successfully.
Dec 04 10:28:04 compute-0 podman[178638]: 2025-12-04 10:28:04.418265434 +0000 UTC m=+0.463936386 container remove 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:28:04 compute-0 systemd[1]: libpod-conmon-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope: Deactivated successfully.
Dec 04 10:28:04 compute-0 sudo[178156]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:04 compute-0 sudo[179058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:28:04 compute-0 sudo[179058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:04 compute-0 sudo[179058]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:04 compute-0 sudo[179127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:28:04 compute-0 sudo[179127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.895561398 +0000 UTC m=+0.048085342 container create 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:28:04 compute-0 systemd[1]: Started libpod-conmon-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope.
Dec 04 10:28:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.872473325 +0000 UTC m=+0.024997279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.983524329 +0000 UTC m=+0.136048283 container init 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.991426272 +0000 UTC m=+0.143950216 container start 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.994642466 +0000 UTC m=+0.147166410 container attach 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:28:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:04 compute-0 gifted_liskov[179438]: 167 167
Dec 04 10:28:04 compute-0 systemd[1]: libpod-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope: Deactivated successfully.
Dec 04 10:28:04 compute-0 podman[179366]: 2025-12-04 10:28:04.99739264 +0000 UTC m=+0.149916574 container died 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ac52727d26336cff703553edf47760419b5db827753a317d29abf90bf64196-merged.mount: Deactivated successfully.
Dec 04 10:28:05 compute-0 podman[179366]: 2025-12-04 10:28:05.03074511 +0000 UTC m=+0.183269054 container remove 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:28:05 compute-0 systemd[1]: libpod-conmon-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope: Deactivated successfully.
Dec 04 10:28:05 compute-0 podman[179599]: 2025-12-04 10:28:05.188236937 +0000 UTC m=+0.039734228 container create e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:28:05 compute-0 systemd[1]: Started libpod-conmon-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope.
Dec 04 10:28:05 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:28:05 compute-0 podman[179599]: 2025-12-04 10:28:05.171226964 +0000 UTC m=+0.022724275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:28:05 compute-0 podman[179599]: 2025-12-04 10:28:05.272589826 +0000 UTC m=+0.124087137 container init e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:28:05 compute-0 podman[179599]: 2025-12-04 10:28:05.278778839 +0000 UTC m=+0.130276130 container start e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:28:05 compute-0 podman[179599]: 2025-12-04 10:28:05.282066024 +0000 UTC m=+0.133563325 container attach e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:28:05 compute-0 lvm[180285]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:28:05 compute-0 lvm[180285]: VG ceph_vg0 finished
Dec 04 10:28:06 compute-0 lvm[180296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:28:06 compute-0 lvm[180296]: VG ceph_vg2 finished
Dec 04 10:28:06 compute-0 lvm[180288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:28:06 compute-0 lvm[180288]: VG ceph_vg1 finished
Dec 04 10:28:06 compute-0 admiring_noyce[179673]: {}
Dec 04 10:28:06 compute-0 systemd[1]: libpod-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Deactivated successfully.
Dec 04 10:28:06 compute-0 systemd[1]: libpod-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Consumed 1.328s CPU time.
Dec 04 10:28:06 compute-0 podman[179599]: 2025-12-04 10:28:06.128909663 +0000 UTC m=+0.980406974 container died e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:28:06 compute-0 ceph-mon[75358]: pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9-merged.mount: Deactivated successfully.
Dec 04 10:28:06 compute-0 podman[179599]: 2025-12-04 10:28:06.311359217 +0000 UTC m=+1.162856528 container remove e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:28:06 compute-0 systemd[1]: libpod-conmon-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Deactivated successfully.
Dec 04 10:28:06 compute-0 sudo[179127]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:28:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:28:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:06 compute-0 sudo[180569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:28:06 compute-0 sudo[180569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:28:06 compute-0 sudo[180569]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:28:07 compute-0 ceph-mon[75358]: pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:10 compute-0 ceph-mon[75358]: pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:11 compute-0 ceph-mon[75358]: pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:11 compute-0 sshd-session[181310]: Invalid user admin1234 from 74.249.218.27 port 41974
Dec 04 10:28:11 compute-0 sshd-session[181310]: Received disconnect from 74.249.218.27 port 41974:11: Bye Bye [preauth]
Dec 04 10:28:11 compute-0 sshd-session[181310]: Disconnected from invalid user admin1234 74.249.218.27 port 41974 [preauth]
Dec 04 10:28:11 compute-0 sshd-session[181307]: Invalid user ubuntu from 103.149.86.230 port 50982
Dec 04 10:28:11 compute-0 sshd-session[181307]: Received disconnect from 103.149.86.230 port 50982:11: Bye Bye [preauth]
Dec 04 10:28:11 compute-0 sshd-session[181307]: Disconnected from invalid user ubuntu 103.149.86.230 port 50982 [preauth]
Dec 04 10:28:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:14 compute-0 ceph-mon[75358]: pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:16 compute-0 ceph-mon[75358]: pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:17 compute-0 ceph-mon[75358]: pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:20 compute-0 ceph-mon[75358]: pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:21 compute-0 ceph-mon[75358]: pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:21 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 04 10:28:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 04 10:28:22 compute-0 groupadd[181326]: group added to /etc/group: name=dnsmasq, GID=991
Dec 04 10:28:22 compute-0 groupadd[181326]: group added to /etc/gshadow: name=dnsmasq
Dec 04 10:28:22 compute-0 groupadd[181326]: new group: name=dnsmasq, GID=991
Dec 04 10:28:22 compute-0 useradd[181333]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 04 10:28:22 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 04 10:28:22 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 04 10:28:22 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec 04 10:28:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:23 compute-0 podman[181343]: 2025-12-04 10:28:23.129466777 +0000 UTC m=+0.091832764 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 04 10:28:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:24 compute-0 groupadd[181371]: group added to /etc/group: name=clevis, GID=990
Dec 04 10:28:24 compute-0 groupadd[181371]: group added to /etc/gshadow: name=clevis
Dec 04 10:28:24 compute-0 ceph-mon[75358]: pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:24 compute-0 groupadd[181371]: new group: name=clevis, GID=990
Dec 04 10:28:24 compute-0 useradd[181378]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 04 10:28:24 compute-0 usermod[181388]: add 'clevis' to group 'tss'
Dec 04 10:28:24 compute-0 usermod[181388]: add 'clevis' to shadow group 'tss'
Dec 04 10:28:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:25 compute-0 podman[181406]: 2025-12-04 10:28:25.092792776 +0000 UTC m=+0.074309176 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 10:28:26 compute-0 ceph-mon[75358]: pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:26 compute-0 polkitd[43629]: Reloading rules
Dec 04 10:28:26 compute-0 polkitd[43629]: Collecting garbage unconditionally...
Dec 04 10:28:26 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Dec 04 10:28:26 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 04 10:28:26 compute-0 polkitd[43629]: Finished loading, compiling and executing 3 rules
Dec 04 10:28:26 compute-0 polkitd[43629]: Reloading rules
Dec 04 10:28:26 compute-0 polkitd[43629]: Collecting garbage unconditionally...
Dec 04 10:28:26 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Dec 04 10:28:26 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 04 10:28:26 compute-0 polkitd[43629]: Finished loading, compiling and executing 3 rules
Dec 04 10:28:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:28:26
Dec 04 10:28:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:28:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:28:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.data']
Dec 04 10:28:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:28:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:28:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:28:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:28:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:28:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:28:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:28:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:28 compute-0 ceph-mon[75358]: pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:29 compute-0 ceph-mon[75358]: pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.592071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109592262, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3573302, "memory_usage": 3627680, "flush_reason": "Manual Compaction"}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109622089, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3486142, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9754, "largest_seqno": 11793, "table_properties": {"data_size": 3476863, "index_size": 5901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17795, "raw_average_key_size": 19, "raw_value_size": 3458498, "raw_average_value_size": 3779, "num_data_blocks": 267, "num_entries": 915, "num_filter_entries": 915, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843875, "oldest_key_time": 1764843875, "file_creation_time": 1764844109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30126 microseconds, and 10417 cpu microseconds.
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.622234) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3486142 bytes OK
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.622283) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624024) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624053) EVENT_LOG_v1 {"time_micros": 1764844109624045, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624087) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3564795, prev total WAL file size 3564795, number of live WAL files 2.
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.625848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3404KB)], [26(6084KB)]
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109625955, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9716794, "oldest_snapshot_seqno": -1}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3747 keys, 8056179 bytes, temperature: kUnknown
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109677396, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8056179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8027637, "index_size": 18064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90062, "raw_average_key_size": 24, "raw_value_size": 7956529, "raw_average_value_size": 2123, "num_data_blocks": 782, "num_entries": 3747, "num_filter_entries": 3747, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.677691) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8056179 bytes
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.679324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.6 rd, 156.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4261, records dropped: 514 output_compression: NoCompression
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.679343) EVENT_LOG_v1 {"time_micros": 1764844109679332, "job": 10, "event": "compaction_finished", "compaction_time_micros": 51524, "compaction_time_cpu_micros": 17495, "output_level": 6, "num_output_files": 1, "total_output_size": 8056179, "num_input_records": 4261, "num_output_records": 3747, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109679983, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109680896, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.625683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:29 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:28:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:32 compute-0 ceph-mon[75358]: pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:32 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 04 10:28:32 compute-0 sshd[1008]: Received signal 15; terminating.
Dec 04 10:28:32 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 04 10:28:32 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 04 10:28:32 compute-0 systemd[1]: sshd.service: Consumed 9.385s CPU time, read 32.0K from disk, written 224.0K to disk.
Dec 04 10:28:32 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 04 10:28:32 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 04 10:28:32 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 10:28:32 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 10:28:32 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 04 10:28:32 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 04 10:28:32 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 04 10:28:32 compute-0 sshd[182213]: Server listening on 0.0.0.0 port 22.
Dec 04 10:28:32 compute-0 sshd[182213]: Server listening on :: port 22.
Dec 04 10:28:32 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 04 10:28:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:28:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:28:34 compute-0 systemd[1]: Reloading.
Dec 04 10:28:34 compute-0 ceph-mon[75358]: pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:34 compute-0 systemd-rc-local-generator[182471]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:34 compute-0 systemd-sysv-generator[182474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:28:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:36 compute-0 ceph-mon[75358]: pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:28:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:28:37 compute-0 sudo[162636]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:37 compute-0 ceph-mon[75358]: pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:38 compute-0 sudo[186938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqsoojufxvzvmgnabpzvylzrfuukghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844117.9205737-336-244321612334480/AnsiballZ_systemd.py'
Dec 04 10:28:38 compute-0 sudo[186938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:38 compute-0 python3.9[186957]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:28:38 compute-0 systemd[1]: Reloading.
Dec 04 10:28:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:39 compute-0 systemd-rc-local-generator[187465]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:39 compute-0 systemd-sysv-generator[187469]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:39 compute-0 sudo[186938]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:39 compute-0 sudo[188216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxndcyftheraebcpyzzvvazritkfnrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844119.488179-336-175166061024364/AnsiballZ_systemd.py'
Dec 04 10:28:39 compute-0 sudo[188216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:40 compute-0 ceph-mon[75358]: pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:40 compute-0 python3.9[188270]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:28:40 compute-0 systemd[1]: Reloading.
Dec 04 10:28:40 compute-0 systemd-rc-local-generator[188699]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:40 compute-0 systemd-sysv-generator[188703]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:40 compute-0 sudo[188216]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:40 compute-0 sudo[189423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djssmdmyicbrgbxjzpghgzenostyiwmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844120.620754-336-151270865952421/AnsiballZ_systemd.py'
Dec 04 10:28:40 compute-0 sudo[189423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:41 compute-0 python3.9[189445]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:28:41 compute-0 systemd[1]: Reloading.
Dec 04 10:28:41 compute-0 systemd-rc-local-generator[189964]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:41 compute-0 systemd-sysv-generator[189968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:41 compute-0 sudo[189423]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:42 compute-0 sudo[190751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htaoxpqbikthhofnahbcfmbgovlotnys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844121.7761922-336-75752144097370/AnsiballZ_systemd.py'
Dec 04 10:28:42 compute-0 sudo[190751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:42 compute-0 ceph-mon[75358]: pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:42 compute-0 python3.9[190768]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:28:42 compute-0 systemd[1]: Reloading.
Dec 04 10:28:42 compute-0 systemd-sysv-generator[191063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:42 compute-0 systemd-rc-local-generator[191060]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:42 compute-0 sudo[190751]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:43 compute-0 sudo[191694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhptituydtmhcjndpgfdfauxguczsibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844122.9574564-365-192015838959896/AnsiballZ_systemd.py'
Dec 04 10:28:43 compute-0 sudo[191694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:43 compute-0 ceph-mon[75358]: pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:43 compute-0 python3.9[191696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:28:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:28:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.440s CPU time.
Dec 04 10:28:43 compute-0 systemd[1]: run-r7e894d4c01a54f8786903b4e3ac50d4e.service: Deactivated successfully.
Dec 04 10:28:43 compute-0 systemd[1]: Reloading.
Dec 04 10:28:43 compute-0 systemd-sysv-generator[191818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:43 compute-0 systemd-rc-local-generator[191812]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:44 compute-0 sudo[191694]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:44 compute-0 sudo[191973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicqssutrajiituotjwlsbghjclahgxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844124.2970104-365-59826326477290/AnsiballZ_systemd.py'
Dec 04 10:28:44 compute-0 sudo[191973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:44 compute-0 python3.9[191975]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:44 compute-0 systemd[1]: Reloading.
Dec 04 10:28:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:45 compute-0 systemd-rc-local-generator[192006]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:45 compute-0 systemd-sysv-generator[192009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:45 compute-0 sudo[191973]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:45 compute-0 sudo[192163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpkaireexyuiapioknevtiqjugwplmra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844125.483216-365-64510919444541/AnsiballZ_systemd.py'
Dec 04 10:28:45 compute-0 sudo[192163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:46 compute-0 python3.9[192165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:46 compute-0 ceph-mon[75358]: pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:46 compute-0 systemd[1]: Reloading.
Dec 04 10:28:46 compute-0 systemd-rc-local-generator[192195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:46 compute-0 systemd-sysv-generator[192199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:46 compute-0 sudo[192163]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:46 compute-0 sudo[192353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnsucjwakrfranipaihxoaekwhueeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844126.5510726-365-28846670996641/AnsiballZ_systemd.py'
Dec 04 10:28:46 compute-0 sudo[192353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:47 compute-0 python3.9[192355]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:47 compute-0 ceph-mon[75358]: pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:48 compute-0 sudo[192353]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:48 compute-0 sudo[192508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dayyggvwhntofghuvqizsqkhxdfcvaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844128.3206873-365-111340901265542/AnsiballZ_systemd.py'
Dec 04 10:28:48 compute-0 sudo[192508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:48 compute-0 python3.9[192510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:48 compute-0 systemd[1]: Reloading.
Dec 04 10:28:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:49 compute-0 systemd-sysv-generator[192541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:49 compute-0 systemd-rc-local-generator[192538]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:49 compute-0 sudo[192508]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:49 compute-0 sudo[192698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwezzqruedpsxuwfonwavbrqsgvofnuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844129.6971402-401-126573652427299/AnsiballZ_systemd.py'
Dec 04 10:28:49 compute-0 sudo[192698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:50 compute-0 python3.9[192700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 04 10:28:50 compute-0 systemd[1]: Reloading.
Dec 04 10:28:50 compute-0 ceph-mon[75358]: pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:50 compute-0 systemd-rc-local-generator[192731]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:28:50 compute-0 systemd-sysv-generator[192735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:28:50 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 04 10:28:50 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 04 10:28:50 compute-0 sudo[192698]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:51 compute-0 sudo[192891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouztrfecskvqceynpvtttabblfcijiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844130.9399543-409-269893016286941/AnsiballZ_systemd.py'
Dec 04 10:28:51 compute-0 sudo[192891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:51 compute-0 python3.9[192893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:51 compute-0 sudo[192891]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:51 compute-0 ceph-mon[75358]: pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:51 compute-0 sudo[193046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxhkyqxwzfnjzyojmpdnnkfmrjergpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844131.710574-409-199065967815397/AnsiballZ_systemd.py'
Dec 04 10:28:51 compute-0 sudo[193046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:52 compute-0 python3.9[193048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:52 compute-0 sudo[193046]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:52 compute-0 sudo[193201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmxkcebfrcstyszkjdoklxxiriliafta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844132.4586143-409-51621236981628/AnsiballZ_systemd.py'
Dec 04 10:28:52 compute-0 sudo[193201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:52 compute-0 python3.9[193203]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:53 compute-0 sudo[193201]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:53 compute-0 sudo[193357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjmmdklacumirfggvfgcrrjkotxexrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844133.1973429-409-87332533641281/AnsiballZ_systemd.py'
Dec 04 10:28:53 compute-0 sudo[193357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:53 compute-0 podman[193330]: 2025-12-04 10:28:53.613139009 +0000 UTC m=+0.109814075 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:28:53 compute-0 python3.9[193359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:53 compute-0 sudo[193357]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:54 compute-0 ceph-mon[75358]: pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:54 compute-0 sudo[193538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvhcnpfvcsjdjmchimvlpfvasdltoapv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844133.9825916-409-176918818968598/AnsiballZ_systemd.py'
Dec 04 10:28:54 compute-0 sudo[193538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:54 compute-0 python3.9[193540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:54 compute-0 sudo[193538]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:28:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.894 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:28:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.894 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:28:55 compute-0 sudo[193693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpkjrmtujoamhvliyykkgixauetybgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844134.7482548-409-274168383287657/AnsiballZ_systemd.py'
Dec 04 10:28:55 compute-0 sudo[193693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:55 compute-0 python3.9[193695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:55 compute-0 ceph-mon[75358]: pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:55 compute-0 podman[193697]: 2025-12-04 10:28:55.380792725 +0000 UTC m=+0.048462538 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 04 10:28:55 compute-0 sudo[193693]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:55 compute-0 sudo[193867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nasxzllhjsdohtbqnwrpeboxhjpccdxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844135.4986517-409-207234398653954/AnsiballZ_systemd.py'
Dec 04 10:28:55 compute-0 sudo[193867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:56 compute-0 python3.9[193869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:56 compute-0 sudo[193867]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:56 compute-0 sudo[194022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jschkdqzqkkhxcljtzfhsvsjvqwtobla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844136.3014925-409-267469646445971/AnsiballZ_systemd.py'
Dec 04 10:28:56 compute-0 sudo[194022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:56 compute-0 python3.9[194024]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:56 compute-0 sudo[194022]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:57 compute-0 ceph-mon[75358]: pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:57 compute-0 sudo[194177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayrhomsvbxmvximcreuxcvxypqkrjeqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844137.095351-409-181965336370539/AnsiballZ_systemd.py'
Dec 04 10:28:57 compute-0 sudo[194177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:57 compute-0 python3.9[194179]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:57 compute-0 sudo[194177]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:28:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:28:58 compute-0 sudo[194332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvuiqtqmojmyybejrqsxeigyggwolmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844137.8232393-409-214215697500133/AnsiballZ_systemd.py'
Dec 04 10:28:58 compute-0 sudo[194332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:58 compute-0 python3.9[194334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:58 compute-0 sudo[194332]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:28:58 compute-0 sudo[194487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpqdluxncomxbzyxmmvyhdtekxkcbqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844138.5911243-409-14779665341304/AnsiballZ_systemd.py'
Dec 04 10:28:58 compute-0 sudo[194487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:28:59 compute-0 python3.9[194489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:59 compute-0 sudo[194487]: pam_unix(sudo:session): session closed for user root
Dec 04 10:28:59 compute-0 sudo[194642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-embsqkksyqwvjnfpgkuigmjckdykrfpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844139.3612611-409-242726864502725/AnsiballZ_systemd.py'
Dec 04 10:28:59 compute-0 sudo[194642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:28:59 compute-0 python3.9[194644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:28:59 compute-0 sudo[194642]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:00 compute-0 ceph-mon[75358]: pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:00 compute-0 sudo[194797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufeltoiyyxocpshowtcpkujmeqjgnnat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844140.1152542-409-126660783985749/AnsiballZ_systemd.py'
Dec 04 10:29:00 compute-0 sudo[194797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:00 compute-0 python3.9[194799]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:29:00 compute-0 sudo[194797]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:01 compute-0 sudo[194952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijzsfjipgnhshfhlsmimzjmftqyscmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844140.9106941-409-250035487640142/AnsiballZ_systemd.py'
Dec 04 10:29:01 compute-0 sudo[194952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:01 compute-0 python3.9[194954]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 04 10:29:01 compute-0 ceph-mon[75358]: pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:01 compute-0 sudo[194952]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:02 compute-0 sudo[195107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwocnjtqvxfoezyyewvacbisbdvmqiad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844141.8522892-511-6130482543264/AnsiballZ_file.py'
Dec 04 10:29:02 compute-0 sudo[195107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:02 compute-0 python3.9[195109]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:02 compute-0 sudo[195107]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:02 compute-0 sudo[195259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yymlqszxajzkbconcuaoonjyphjgidkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844142.478424-511-122662647680386/AnsiballZ_file.py'
Dec 04 10:29:02 compute-0 sudo[195259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:03 compute-0 python3.9[195261]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:03 compute-0 sudo[195259]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:03 compute-0 sudo[195411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwdqsravuxnudgcuaijrakmcepxzproc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844143.3890443-511-206692287118977/AnsiballZ_file.py'
Dec 04 10:29:03 compute-0 sudo[195411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:03 compute-0 ceph-mon[75358]: pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:03 compute-0 python3.9[195413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:03 compute-0 sudo[195411]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:04 compute-0 sudo[195563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyhnmubzwhwpviigvksphcfczsynoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844144.0334427-511-244419922378303/AnsiballZ_file.py'
Dec 04 10:29:04 compute-0 sudo[195563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:04 compute-0 python3.9[195565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:04 compute-0 sudo[195563]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:04 compute-0 sudo[195715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhqiqasvdxmjizescwxbzhdnvgpqpbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844144.6096501-511-272310293730120/AnsiballZ_file.py'
Dec 04 10:29:04 compute-0 sudo[195715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:05 compute-0 python3.9[195717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:05 compute-0 sudo[195715]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:05 compute-0 sudo[195869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdetppcdfnebqrrbscrsuunwoyfszmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844145.1885092-511-195687125149901/AnsiballZ_file.py'
Dec 04 10:29:05 compute-0 sudo[195869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:05 compute-0 python3.9[195871]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:29:05 compute-0 sudo[195869]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:06 compute-0 ceph-mon[75358]: pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:06 compute-0 sshd-session[195718]: Invalid user opc from 103.179.218.243 port 42114
Dec 04 10:29:06 compute-0 sudo[196021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylcbdcrcebodhuiwiwbzrzsbymezhsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844145.9168942-554-269859166839638/AnsiballZ_stat.py'
Dec 04 10:29:06 compute-0 sudo[196021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:06 compute-0 sshd-session[195718]: Received disconnect from 103.179.218.243 port 42114:11: Bye Bye [preauth]
Dec 04 10:29:06 compute-0 sshd-session[195718]: Disconnected from invalid user opc 103.179.218.243 port 42114 [preauth]
Dec 04 10:29:06 compute-0 python3.9[196023]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:06 compute-0 sudo[196024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:29:06 compute-0 sudo[196024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:06 compute-0 sudo[196024]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:06 compute-0 sudo[196021]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:06 compute-0 sudo[196050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:29:06 compute-0 sudo[196050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:07 compute-0 sudo[196050]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:29:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:29:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:07 compute-0 sudo[196198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:29:07 compute-0 sudo[196198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:07 compute-0 sudo[196198]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:07 compute-0 sudo[196254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngmyvdviaybermylsumaoqewvqsophka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844145.9168942-554-269859166839638/AnsiballZ_copy.py'
Dec 04 10:29:07 compute-0 sudo[196254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:07 compute-0 sudo[196252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:29:07 compute-0 sudo[196252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:07 compute-0 python3.9[196271]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844145.9168942-554-269859166839638/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:07 compute-0 sudo[196254]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.492509232 +0000 UTC m=+0.042557315 container create c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:29:07 compute-0 systemd[1]: Started libpod-conmon-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope.
Dec 04 10:29:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.472718215 +0000 UTC m=+0.022766328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.5783949 +0000 UTC m=+0.128443003 container init c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.587115298 +0000 UTC m=+0.137163381 container start c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.590744394 +0000 UTC m=+0.140792497 container attach c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:29:07 compute-0 exciting_hamilton[196356]: 167 167
Dec 04 10:29:07 compute-0 systemd[1]: libpod-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope: Deactivated successfully.
Dec 04 10:29:07 compute-0 conmon[196356]: conmon c7b9e1aa1a67400a3154 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope/container/memory.events
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.60668299 +0000 UTC m=+0.156731103 container died c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b5b0195bbc257969c325dfdb0893f64615da488a4a79de8028075c9e08c7481-merged.mount: Deactivated successfully.
Dec 04 10:29:07 compute-0 podman[196313]: 2025-12-04 10:29:07.652441828 +0000 UTC m=+0.202489911 container remove c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:29:07 compute-0 systemd[1]: libpod-conmon-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope: Deactivated successfully.
Dec 04 10:29:07 compute-0 podman[196455]: 2025-12-04 10:29:07.819373337 +0000 UTC m=+0.045308186 container create 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:29:07 compute-0 sudo[196495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxamilbjwudzastmyoakpcoiizzznqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844147.51163-554-236402673375619/AnsiballZ_stat.py'
Dec 04 10:29:07 compute-0 sudo[196495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:07 compute-0 systemd[1]: Started libpod-conmon-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope.
Dec 04 10:29:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:07 compute-0 podman[196455]: 2025-12-04 10:29:07.799882167 +0000 UTC m=+0.025817016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:07 compute-0 podman[196455]: 2025-12-04 10:29:07.924080118 +0000 UTC m=+0.150014987 container init 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:29:07 compute-0 podman[196455]: 2025-12-04 10:29:07.933179167 +0000 UTC m=+0.159114006 container start 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:29:07 compute-0 podman[196455]: 2025-12-04 10:29:07.937626693 +0000 UTC m=+0.163561542 container attach 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:29:08 compute-0 python3.9[196498]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:08 compute-0 sudo[196495]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:08 compute-0 ceph-mon[75358]: pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:29:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:29:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:29:08 compute-0 sudo[196640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpicbwesgxgatmhfbxxhuulyclwrtvvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844147.51163-554-236402673375619/AnsiballZ_copy.py'
Dec 04 10:29:08 compute-0 sudo[196640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:08 compute-0 relaxed_diffie[196501]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:29:08 compute-0 relaxed_diffie[196501]: --> All data devices are unavailable
Dec 04 10:29:08 compute-0 systemd[1]: libpod-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope: Deactivated successfully.
Dec 04 10:29:08 compute-0 conmon[196501]: conmon 27afa0e007b2d4a3c596 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope/container/memory.events
Dec 04 10:29:08 compute-0 podman[196455]: 2025-12-04 10:29:08.48118987 +0000 UTC m=+0.707124709 container died 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6-merged.mount: Deactivated successfully.
Dec 04 10:29:08 compute-0 podman[196455]: 2025-12-04 10:29:08.536388665 +0000 UTC m=+0.762323504 container remove 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 04 10:29:08 compute-0 systemd[1]: libpod-conmon-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope: Deactivated successfully.
Dec 04 10:29:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:08 compute-0 sudo[196252]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:08 compute-0 python3.9[196642]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844147.51163-554-236402673375619/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:08 compute-0 sudo[196640]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:08 compute-0 sudo[196659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:29:08 compute-0 sudo[196659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:08 compute-0 sudo[196659]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:08 compute-0 sudo[196692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:29:08 compute-0 sudo[196692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:08 compute-0 podman[196843]: 2025-12-04 10:29:08.997812273 +0000 UTC m=+0.039725801 container create 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:29:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:09 compute-0 systemd[1]: Started libpod-conmon-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope.
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:08.980982962 +0000 UTC m=+0.022896510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:09 compute-0 sudo[196887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdnnapfzsmryyeanmuaofblmwjmabir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844148.7507305-554-20831354211948/AnsiballZ_stat.py'
Dec 04 10:29:09 compute-0 sudo[196887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:09.119938229 +0000 UTC m=+0.161851807 container init 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:09.12761489 +0000 UTC m=+0.169528428 container start 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:09.131118202 +0000 UTC m=+0.173031800 container attach 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:29:09 compute-0 priceless_black[196885]: 167 167
Dec 04 10:29:09 compute-0 systemd[1]: libpod-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope: Deactivated successfully.
Dec 04 10:29:09 compute-0 conmon[196885]: conmon 4089aeca5d6274d094c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope/container/memory.events
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:09.135063735 +0000 UTC m=+0.176977283 container died 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-28228f99a6a9af619cb1a859b8820e6f6403065e69b4e172e0e145c807923a72-merged.mount: Deactivated successfully.
Dec 04 10:29:09 compute-0 podman[196843]: 2025-12-04 10:29:09.185201598 +0000 UTC m=+0.227115156 container remove 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:29:09 compute-0 systemd[1]: libpod-conmon-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope: Deactivated successfully.
Dec 04 10:29:09 compute-0 python3.9[196890]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.367775246 +0000 UTC m=+0.044297551 container create 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:29:09 compute-0 sudo[196887]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:09 compute-0 systemd[1]: Started libpod-conmon-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope.
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.346014287 +0000 UTC m=+0.022536612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.464409775 +0000 UTC m=+0.140932110 container init 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.473141554 +0000 UTC m=+0.149663869 container start 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.476299947 +0000 UTC m=+0.152822252 container attach 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:29:09 compute-0 sudo[197059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iixwsrjyhfbntevedpckwdtfilpgdpez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844148.7507305-554-20831354211948/AnsiballZ_copy.py'
Dec 04 10:29:09 compute-0 sudo[197059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:09 compute-0 competent_knuth[196930]: {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     "0": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "devices": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "/dev/loop3"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             ],
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_name": "ceph_lv0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_size": "21470642176",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "name": "ceph_lv0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "tags": {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_name": "ceph",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.crush_device_class": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.encrypted": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.objectstore": "bluestore",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_id": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.vdo": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.with_tpm": "0"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             },
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "vg_name": "ceph_vg0"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         }
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     ],
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     "1": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "devices": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "/dev/loop4"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             ],
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_name": "ceph_lv1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_size": "21470642176",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "name": "ceph_lv1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "tags": {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_name": "ceph",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.crush_device_class": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.encrypted": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.objectstore": "bluestore",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_id": "1",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.vdo": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.with_tpm": "0"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             },
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "vg_name": "ceph_vg1"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         }
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     ],
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     "2": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "devices": [
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "/dev/loop5"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             ],
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_name": "ceph_lv2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_size": "21470642176",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "name": "ceph_lv2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "tags": {
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.cluster_name": "ceph",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.crush_device_class": "",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.encrypted": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.objectstore": "bluestore",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osd_id": "2",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.vdo": "0",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:                 "ceph.with_tpm": "0"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             },
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "type": "block",
Dec 04 10:29:09 compute-0 competent_knuth[196930]:             "vg_name": "ceph_vg2"
Dec 04 10:29:09 compute-0 competent_knuth[196930]:         }
Dec 04 10:29:09 compute-0 competent_knuth[196930]:     ]
Dec 04 10:29:09 compute-0 competent_knuth[196930]: }
Dec 04 10:29:09 compute-0 systemd[1]: libpod-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope: Deactivated successfully.
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.804724563 +0000 UTC m=+0.481246888 container died 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a-merged.mount: Deactivated successfully.
Dec 04 10:29:09 compute-0 podman[196912]: 2025-12-04 10:29:09.859170478 +0000 UTC m=+0.535692783 container remove 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:29:09 compute-0 systemd[1]: libpod-conmon-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope: Deactivated successfully.
Dec 04 10:29:09 compute-0 sudo[196692]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:09 compute-0 python3.9[197061]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844148.7507305-554-20831354211948/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:09 compute-0 sudo[197075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:29:09 compute-0 sudo[197075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:09 compute-0 sudo[197075]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:10 compute-0 sudo[197059]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:10 compute-0 sudo[197100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:29:10 compute-0 sudo[197100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:10 compute-0 ceph-mon[75358]: pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.324420425 +0000 UTC m=+0.039623517 container create 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:29:10 compute-0 systemd[1]: Started libpod-conmon-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope.
Dec 04 10:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:10 compute-0 sudo[197306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nytgjytnmqkbznctipmfbpasyhubhihd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844150.1244535-554-105992836077490/AnsiballZ_stat.py'
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.307915344 +0000 UTC m=+0.023118456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:10 compute-0 sudo[197306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.407301146 +0000 UTC m=+0.122504258 container init 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.413929678 +0000 UTC m=+0.129132770 container start 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.417450971 +0000 UTC m=+0.132654083 container attach 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:29:10 compute-0 great_visvesvaraya[197287]: 167 167
Dec 04 10:29:10 compute-0 systemd[1]: libpod-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope: Deactivated successfully.
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.420846869 +0000 UTC m=+0.136049961 container died 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-23cbbbcfaca470dcd37bb163d1de646685a3025f93e810f596b38f2cf5dcf23c-merged.mount: Deactivated successfully.
Dec 04 10:29:10 compute-0 podman[197237]: 2025-12-04 10:29:10.458759052 +0000 UTC m=+0.173962144 container remove 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:29:10 compute-0 systemd[1]: libpod-conmon-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope: Deactivated successfully.
Dec 04 10:29:10 compute-0 python3.9[197308]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:10 compute-0 podman[197330]: 2025-12-04 10:29:10.633835004 +0000 UTC m=+0.050894143 container create 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:29:10 compute-0 sudo[197306]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:10 compute-0 systemd[1]: Started libpod-conmon-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope.
Dec 04 10:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:29:10 compute-0 podman[197330]: 2025-12-04 10:29:10.613578794 +0000 UTC m=+0.030637953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:29:10 compute-0 podman[197330]: 2025-12-04 10:29:10.721695294 +0000 UTC m=+0.138754453 container init 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:29:10 compute-0 podman[197330]: 2025-12-04 10:29:10.732036784 +0000 UTC m=+0.149095923 container start 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:29:10 compute-0 podman[197330]: 2025-12-04 10:29:10.741841621 +0000 UTC m=+0.158900760 container attach 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:29:10 compute-0 sudo[197483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhfrhvazjspymhvbuafgvhcjzvefenco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844150.1244535-554-105992836077490/AnsiballZ_copy.py'
Dec 04 10:29:10 compute-0 sudo[197483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:11 compute-0 python3.9[197485]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844150.1244535-554-105992836077490/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:11 compute-0 sudo[197483]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:11 compute-0 lvm[197631]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:29:11 compute-0 lvm[197630]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:29:11 compute-0 lvm[197630]: VG ceph_vg1 finished
Dec 04 10:29:11 compute-0 lvm[197631]: VG ceph_vg0 finished
Dec 04 10:29:11 compute-0 lvm[197645]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:29:11 compute-0 lvm[197645]: VG ceph_vg2 finished
Dec 04 10:29:11 compute-0 wizardly_shamir[197352]: {}
Dec 04 10:29:11 compute-0 systemd[1]: libpod-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Deactivated successfully.
Dec 04 10:29:11 compute-0 podman[197330]: 2025-12-04 10:29:11.599170272 +0000 UTC m=+1.016229441 container died 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:29:11 compute-0 systemd[1]: libpod-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Consumed 1.379s CPU time.
Dec 04 10:29:11 compute-0 sudo[197704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gerdsvyleotbbepogsewwgyiaurdeqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844151.313199-554-213992733322014/AnsiballZ_stat.py'
Dec 04 10:29:11 compute-0 sudo[197704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815-merged.mount: Deactivated successfully.
Dec 04 10:29:11 compute-0 podman[197330]: 2025-12-04 10:29:11.676869325 +0000 UTC m=+1.093928474 container remove 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:29:11 compute-0 systemd[1]: libpod-conmon-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Deactivated successfully.
Dec 04 10:29:11 compute-0 sudo[197100]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:29:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:29:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:11 compute-0 python3.9[197712]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:11 compute-0 sudo[197721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:29:11 compute-0 sudo[197721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:29:11 compute-0 sudo[197721]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:11 compute-0 sudo[197704]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:12 compute-0 ceph-mon[75358]: pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:29:12 compute-0 sudo[197868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaezcpiybhjjgrphtqyisoygdrlaurnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844151.313199-554-213992733322014/AnsiballZ_copy.py'
Dec 04 10:29:12 compute-0 sudo[197868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:12 compute-0 python3.9[197870]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844151.313199-554-213992733322014/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:12 compute-0 sudo[197868]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:12 compute-0 sudo[198020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmqzmfkwwrjtvimwaboamkaxvntfbfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844152.5878441-554-279224405908328/AnsiballZ_stat.py'
Dec 04 10:29:12 compute-0 sudo[198020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:13 compute-0 python3.9[198022]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:13 compute-0 sudo[198020]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:13 compute-0 sudo[198145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdairhnytqconsycnljxtslspptbfdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844152.5878441-554-279224405908328/AnsiballZ_copy.py'
Dec 04 10:29:13 compute-0 sudo[198145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:13 compute-0 python3.9[198147]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844152.5878441-554-279224405908328/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:13 compute-0 sudo[198145]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:14 compute-0 sudo[198297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzqkjeybozzihafoosnmkalcdgmmfcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844153.759381-554-99471758427157/AnsiballZ_stat.py'
Dec 04 10:29:14 compute-0 sudo[198297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:14 compute-0 ceph-mon[75358]: pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:14 compute-0 python3.9[198299]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:14 compute-0 sudo[198297]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:14 compute-0 sudo[198420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glftwdamwsrpaujqejystibserfgiavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844153.759381-554-99471758427157/AnsiballZ_copy.py'
Dec 04 10:29:14 compute-0 sudo[198420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:14 compute-0 python3.9[198422]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844153.759381-554-99471758427157/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:14 compute-0 sudo[198420]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:15 compute-0 sudo[198572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujbwoatavtfpbpnoqnrapchmubdbfay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844154.8776581-554-259186467013238/AnsiballZ_stat.py'
Dec 04 10:29:15 compute-0 sudo[198572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:15 compute-0 python3.9[198574]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:15 compute-0 sudo[198572]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:15 compute-0 ceph-mon[75358]: pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:15 compute-0 sudo[198697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tttvsoiwfcjaautyvmzibqkcyktkjzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844154.8776581-554-259186467013238/AnsiballZ_copy.py'
Dec 04 10:29:15 compute-0 sudo[198697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:15 compute-0 python3.9[198699]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844154.8776581-554-259186467013238/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:16 compute-0 sudo[198697]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:16 compute-0 auditd[705]: Audit daemon rotating log files
Dec 04 10:29:16 compute-0 sudo[198849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylwexlrdoojmfrperpysvltcfhoqfgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844156.2771392-667-62567774661040/AnsiballZ_command.py'
Dec 04 10:29:16 compute-0 sudo[198849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:16 compute-0 python3.9[198851]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 04 10:29:16 compute-0 sudo[198849]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:17 compute-0 sudo[199002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioiqnmvbvtxscudmvlslailmmphgfmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844156.948884-676-181464732761542/AnsiballZ_file.py'
Dec 04 10:29:17 compute-0 sudo[199002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:17 compute-0 python3.9[199004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:17 compute-0 sudo[199002]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:17 compute-0 sudo[199154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppxjsdsjtpobksvpkwacknrdytmowpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844157.5577085-676-222565978665646/AnsiballZ_file.py'
Dec 04 10:29:17 compute-0 sudo[199154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:18 compute-0 python3.9[199156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:18 compute-0 ceph-mon[75358]: pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:18 compute-0 sudo[199154]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:18 compute-0 sudo[199306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdrzdnjupfbhmaysdtkefwyrzemqzuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844158.2135756-676-249503470496798/AnsiballZ_file.py'
Dec 04 10:29:18 compute-0 sudo[199306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:18 compute-0 python3.9[199308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:18 compute-0 sudo[199306]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:19 compute-0 sudo[199458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvfmetczncittvjxzvsomevrnhpfvlls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844158.7963805-676-48580278407593/AnsiballZ_file.py'
Dec 04 10:29:19 compute-0 sudo[199458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:19 compute-0 python3.9[199460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:19 compute-0 sudo[199458]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:19 compute-0 sudo[199610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inasuieukkvfoemwnkwwvvnorusyrvxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844159.4468803-676-242804571993570/AnsiballZ_file.py'
Dec 04 10:29:19 compute-0 sudo[199610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:19 compute-0 python3.9[199612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:19 compute-0 sudo[199610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:20 compute-0 ceph-mon[75358]: pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:20 compute-0 sudo[199762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjjyfmraivzqmddybbirarylsifwbjts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844160.0319736-676-263372818698966/AnsiballZ_file.py'
Dec 04 10:29:20 compute-0 sudo[199762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:20 compute-0 python3.9[199764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:20 compute-0 sudo[199762]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:20 compute-0 sudo[199914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dektrywunkhpeacueqnckgayvwezqdlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844160.6255937-676-165951343629088/AnsiballZ_file.py'
Dec 04 10:29:20 compute-0 sudo[199914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:21 compute-0 python3.9[199916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:21 compute-0 sudo[199914]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:21 compute-0 sudo[200066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfgmmjfvudyfsfvuetghkyogwumufet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844161.2502065-676-250501995790688/AnsiballZ_file.py'
Dec 04 10:29:21 compute-0 sudo[200066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:21 compute-0 ceph-mon[75358]: pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:22 compute-0 python3.9[200068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:22 compute-0 sudo[200066]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:22 compute-0 sudo[200218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzuhjsfcioqsmqtiwpvckpocjhynnsdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844162.1810217-676-263566587153006/AnsiballZ_file.py'
Dec 04 10:29:22 compute-0 sudo[200218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:22 compute-0 python3.9[200220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:22 compute-0 sudo[200218]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:23 compute-0 sudo[200370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sikzyrtbknsjcstzbzvgeevjeovsclvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844162.8370574-676-115862096524021/AnsiballZ_file.py'
Dec 04 10:29:23 compute-0 sudo[200370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:23 compute-0 python3.9[200372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:23 compute-0 sudo[200370]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:23 compute-0 sudo[200531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbytcvvhrtudbtiiclcdajnuhsknorpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844163.4987614-676-106986388360224/AnsiballZ_file.py'
Dec 04 10:29:23 compute-0 sudo[200531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:23 compute-0 podman[200496]: 2025-12-04 10:29:23.902203057 +0000 UTC m=+0.141957549 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 04 10:29:23 compute-0 python3.9[200541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:24 compute-0 sudo[200531]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:24 compute-0 ceph-mon[75358]: pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:24 compute-0 sudo[200700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwzofbfqjyxwknvhlogfjscmewtramd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844164.133058-676-99455107818468/AnsiballZ_file.py'
Dec 04 10:29:24 compute-0 sudo[200700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:24 compute-0 python3.9[200702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:24 compute-0 sudo[200700]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:24 compute-0 sudo[200852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jklrvzgnzuzrbqbkgkrgfnswstiaubrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844164.7330368-676-45127666423776/AnsiballZ_file.py'
Dec 04 10:29:24 compute-0 sudo[200852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:25 compute-0 python3.9[200854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:25 compute-0 sudo[200852]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:25 compute-0 sudo[201020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpasltovzdoreddgrgajhepsewcyfvki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844165.3492837-676-70268292217166/AnsiballZ_file.py'
Dec 04 10:29:25 compute-0 podman[200978]: 2025-12-04 10:29:25.666606033 +0000 UTC m=+0.061953289 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:29:25 compute-0 sudo[201020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:25 compute-0 python3.9[201025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:25 compute-0 sudo[201020]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:26 compute-0 ceph-mon[75358]: pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:26 compute-0 sudo[201175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxmuvccrrfwxcmpgbgjpujtzmolubfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844166.0674963-775-224813271706822/AnsiballZ_stat.py'
Dec 04 10:29:26 compute-0 sudo[201175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:26 compute-0 python3.9[201177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:26 compute-0 sudo[201175]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:29:26
Dec 04 10:29:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:29:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:29:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root']
Dec 04 10:29:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:29:26 compute-0 sudo[201300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogwsmpgditaygwdwrwcwgmvnfxldtjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844166.0674963-775-224813271706822/AnsiballZ_copy.py'
Dec 04 10:29:26 compute-0 sudo[201300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:27 compute-0 python3.9[201302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844166.0674963-775-224813271706822/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:27 compute-0 sudo[201300]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:27 compute-0 ceph-mon[75358]: pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:27 compute-0 sudo[201452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxfyunrpbvphvuftoejoeygdydkoxvxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844167.310295-775-98094292437953/AnsiballZ_stat.py'
Dec 04 10:29:27 compute-0 sudo[201452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:27 compute-0 python3.9[201454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:27 compute-0 sudo[201452]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:29:27 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:29:28 compute-0 sshd-session[201178]: Received disconnect from 103.149.86.230 port 55872:11: Bye Bye [preauth]
Dec 04 10:29:28 compute-0 sshd-session[201178]: Disconnected from authenticating user root 103.149.86.230 port 55872 [preauth]
Dec 04 10:29:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:29:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:29:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:29:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:29:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:29:28 compute-0 sudo[201575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmzvkiauegdkdjimitjxfqevkisvlunk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844167.310295-775-98094292437953/AnsiballZ_copy.py'
Dec 04 10:29:28 compute-0 sudo[201575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:28 compute-0 sshd-session[201578]: Invalid user syncthing from 107.175.213.239 port 40954
Dec 04 10:29:28 compute-0 sshd-session[201578]: Received disconnect from 107.175.213.239 port 40954:11: Bye Bye [preauth]
Dec 04 10:29:28 compute-0 sshd-session[201578]: Disconnected from invalid user syncthing 107.175.213.239 port 40954 [preauth]
Dec 04 10:29:28 compute-0 python3.9[201577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844167.310295-775-98094292437953/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:28 compute-0 sudo[201575]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:28 compute-0 sudo[201729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywxrdrlnilfbsxbuhlbnjioqdzfkzywn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844168.4497063-775-72542875163687/AnsiballZ_stat.py'
Dec 04 10:29:28 compute-0 sudo[201729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:28 compute-0 python3.9[201731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:28 compute-0 sudo[201729]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:29 compute-0 sudo[201853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzlgcovglccnebmqnzlyqxqfpurbwyms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844168.4497063-775-72542875163687/AnsiballZ_copy.py'
Dec 04 10:29:29 compute-0 sudo[201853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:29 compute-0 python3.9[201856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844168.4497063-775-72542875163687/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:29 compute-0 sudo[201853]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:29 compute-0 sshd-session[201847]: Invalid user syncthing from 74.249.218.27 port 41556
Dec 04 10:29:29 compute-0 sshd-session[201847]: Received disconnect from 74.249.218.27 port 41556:11: Bye Bye [preauth]
Dec 04 10:29:29 compute-0 sshd-session[201847]: Disconnected from invalid user syncthing 74.249.218.27 port 41556 [preauth]
Dec 04 10:29:29 compute-0 sudo[202006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfcrchokmyquzxfxxccokwgbsvocvoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844169.5283613-775-135504904442289/AnsiballZ_stat.py'
Dec 04 10:29:29 compute-0 sudo[202006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:29 compute-0 python3.9[202008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:29 compute-0 sudo[202006]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:30 compute-0 ceph-mon[75358]: pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:30 compute-0 sudo[202129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirahnsvqfixzkxqfmgdnqwpteqjsygi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844169.5283613-775-135504904442289/AnsiballZ_copy.py'
Dec 04 10:29:30 compute-0 sudo[202129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:30 compute-0 python3.9[202131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844169.5283613-775-135504904442289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:30 compute-0 sudo[202129]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:30 compute-0 sudo[202283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpkrmikfpifupblysiodevommgehmqup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844170.5996962-775-25796917775519/AnsiballZ_stat.py'
Dec 04 10:29:30 compute-0 sudo[202283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:30 compute-0 sshd-session[202132]: Invalid user teste from 217.154.62.22 port 36978
Dec 04 10:29:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:31 compute-0 sshd-session[202132]: Received disconnect from 217.154.62.22 port 36978:11: Bye Bye [preauth]
Dec 04 10:29:31 compute-0 sshd-session[202132]: Disconnected from invalid user teste 217.154.62.22 port 36978 [preauth]
Dec 04 10:29:31 compute-0 python3.9[202285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:31 compute-0 sudo[202283]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:31 compute-0 sudo[202406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhuyocyhwadgeikywqlfbukqczgvytym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844170.5996962-775-25796917775519/AnsiballZ_copy.py'
Dec 04 10:29:31 compute-0 sudo[202406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:31 compute-0 python3.9[202408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844170.5996962-775-25796917775519/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:31 compute-0 sudo[202406]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:32 compute-0 ceph-mon[75358]: pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:32 compute-0 sudo[202558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyiptpwvpabuvjcyrntainodlcqgowvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844171.9442172-775-63282791572148/AnsiballZ_stat.py'
Dec 04 10:29:32 compute-0 sudo[202558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:32 compute-0 python3.9[202560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:32 compute-0 sudo[202558]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:32 compute-0 sudo[202681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzrrrxpevbhpttuadbuniocxozthjxla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844171.9442172-775-63282791572148/AnsiballZ_copy.py'
Dec 04 10:29:32 compute-0 sudo[202681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:33 compute-0 python3.9[202683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844171.9442172-775-63282791572148/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:33 compute-0 sudo[202681]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:33 compute-0 sudo[202833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdzqburqolkzuyuwgtblekyxvtmudja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844173.2911599-775-52485824383788/AnsiballZ_stat.py'
Dec 04 10:29:33 compute-0 sudo[202833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:33 compute-0 python3.9[202835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:33 compute-0 sudo[202833]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:34 compute-0 sudo[202956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzjvaimkhfgkakbsmwfvwegkhydjrear ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844173.2911599-775-52485824383788/AnsiballZ_copy.py'
Dec 04 10:29:34 compute-0 sudo[202956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:34 compute-0 ceph-mon[75358]: pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:34 compute-0 python3.9[202958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844173.2911599-775-52485824383788/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:34 compute-0 sudo[202956]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:34 compute-0 sudo[203108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aviidwvucpzwkmngltpwrwiciddpcqwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844174.5129876-775-40357442603023/AnsiballZ_stat.py'
Dec 04 10:29:34 compute-0 sudo[203108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:34 compute-0 python3.9[203110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:34 compute-0 sudo[203108]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:35 compute-0 ceph-mon[75358]: pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:35 compute-0 sudo[203231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkenjyrydmknuynqggacxewjxvwvzhci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844174.5129876-775-40357442603023/AnsiballZ_copy.py'
Dec 04 10:29:35 compute-0 sudo[203231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:35 compute-0 python3.9[203233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844174.5129876-775-40357442603023/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:35 compute-0 sudo[203231]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:35 compute-0 sudo[203383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihxcmyqgooopoayjdfiddiwweilxvnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844175.611288-775-120937440869745/AnsiballZ_stat.py'
Dec 04 10:29:35 compute-0 sudo[203383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:36 compute-0 python3.9[203385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:36 compute-0 sudo[203383]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:36 compute-0 sudo[203506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvwlsorhyzzyohdeuqsfxrgmciyncqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844175.611288-775-120937440869745/AnsiballZ_copy.py'
Dec 04 10:29:36 compute-0 sudo[203506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:36 compute-0 python3.9[203508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844175.611288-775-120937440869745/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:36 compute-0 sudo[203506]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:29:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:29:37 compute-0 sudo[203658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tauqnfjyllnpscobdayljylsrrwxjqjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844176.781864-775-213560848679011/AnsiballZ_stat.py'
Dec 04 10:29:37 compute-0 sudo[203658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:37 compute-0 python3.9[203660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:37 compute-0 sudo[203658]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:37 compute-0 sudo[203781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fluocoslrlxlerbmndkwwefztavlrgeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844176.781864-775-213560848679011/AnsiballZ_copy.py'
Dec 04 10:29:37 compute-0 sudo[203781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:37 compute-0 python3.9[203783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844176.781864-775-213560848679011/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:37 compute-0 sudo[203781]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:38 compute-0 ceph-mon[75358]: pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:38 compute-0 sudo[203933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkmgfhvbthkiajybylynvufwqgcpfgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844177.9276597-775-186947216705732/AnsiballZ_stat.py'
Dec 04 10:29:38 compute-0 sudo[203933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:38 compute-0 python3.9[203935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:38 compute-0 sudo[203933]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:38 compute-0 sudo[204056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkrczvojxjmknyqxauiikquqbjpcqfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844177.9276597-775-186947216705732/AnsiballZ_copy.py'
Dec 04 10:29:38 compute-0 sudo[204056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:38 compute-0 python3.9[204058]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844177.9276597-775-186947216705732/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:38 compute-0 sudo[204056]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:39 compute-0 sudo[204208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldlxeaxtjlqkxctcejkvlmdtmrudyroo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844179.0446048-775-280373490400858/AnsiballZ_stat.py'
Dec 04 10:29:39 compute-0 sudo[204208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:39 compute-0 ceph-mon[75358]: pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:39 compute-0 python3.9[204210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:39 compute-0 sudo[204208]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:39 compute-0 sudo[204331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uesqchcgqcugvcmkllminmohkwyvcocp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844179.0446048-775-280373490400858/AnsiballZ_copy.py'
Dec 04 10:29:39 compute-0 sudo[204331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:39 compute-0 python3.9[204333]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844179.0446048-775-280373490400858/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:39 compute-0 sudo[204331]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:40 compute-0 sudo[204483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkonbimktlqrhhkatuonphlnubvuzbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844180.0977564-775-102688361263797/AnsiballZ_stat.py'
Dec 04 10:29:40 compute-0 sudo[204483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:40 compute-0 python3.9[204485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:40 compute-0 sudo[204483]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:40 compute-0 sudo[204606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axrwihdhotkmdfhirkztbysqkghuzaat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844180.0977564-775-102688361263797/AnsiballZ_copy.py'
Dec 04 10:29:40 compute-0 sudo[204606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:41 compute-0 python3.9[204608]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844180.0977564-775-102688361263797/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:41 compute-0 sudo[204606]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:41 compute-0 sudo[204758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgatltkkwiqemrescwlnjtgyiaqqllji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844181.2038312-775-235087340015284/AnsiballZ_stat.py'
Dec 04 10:29:41 compute-0 sudo[204758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:41 compute-0 python3.9[204760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:29:41 compute-0 sudo[204758]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:42 compute-0 sudo[204881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyounoqislmgwlseubjnzqcqmzgofomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844181.2038312-775-235087340015284/AnsiballZ_copy.py'
Dec 04 10:29:42 compute-0 sudo[204881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:42 compute-0 ceph-mon[75358]: pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:42 compute-0 python3.9[204883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844181.2038312-775-235087340015284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:42 compute-0 sudo[204881]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:42 compute-0 python3.9[205033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:29:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:43 compute-0 sudo[205186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvvlbbidpwozxwqvsyzziecfryycuayo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844183.126694-981-79117242599591/AnsiballZ_seboolean.py'
Dec 04 10:29:43 compute-0 sudo[205186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:43 compute-0 python3.9[205188]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 04 10:29:44 compute-0 ceph-mon[75358]: pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:45 compute-0 ceph-mon[75358]: pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:45 compute-0 sudo[205186]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:46 compute-0 sudo[205342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualvklaldajtlanihseybyxxnkskdvtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844185.7683866-989-13584497664510/AnsiballZ_copy.py'
Dec 04 10:29:46 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 04 10:29:46 compute-0 sudo[205342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:46 compute-0 python3.9[205344]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:46 compute-0 sudo[205342]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:46 compute-0 sudo[205494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czzkjjfopsiezqxsdxdumgcecyvntgaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844186.3615592-989-9823684016846/AnsiballZ_copy.py'
Dec 04 10:29:46 compute-0 sudo[205494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:46 compute-0 python3.9[205496]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:47 compute-0 sudo[205494]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:47 compute-0 sudo[205646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnlrghdpiouiitakbppfjvgfonibtym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844187.128499-989-4785343140009/AnsiballZ_copy.py'
Dec 04 10:29:47 compute-0 sudo[205646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:47 compute-0 python3.9[205648]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:47 compute-0 sudo[205646]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:48 compute-0 ceph-mon[75358]: pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:48 compute-0 sudo[205798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlibhgqitjuhvpizihphhkrqtmoudamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844188.0081856-989-240687306011451/AnsiballZ_copy.py'
Dec 04 10:29:48 compute-0 sudo[205798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:48 compute-0 python3.9[205800]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:48 compute-0 sudo[205798]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:48 compute-0 sudo[205950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohvrpygbfdrbregdlgswjbdtfqgsnlgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844188.558526-989-64973467203491/AnsiballZ_copy.py'
Dec 04 10:29:48 compute-0 sudo[205950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:48 compute-0 python3.9[205952]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:49 compute-0 sudo[205950]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:49 compute-0 sudo[206102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbzloqmuugadespytrthzwyeyyehort ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844189.1765149-1025-264331989371375/AnsiballZ_copy.py'
Dec 04 10:29:49 compute-0 sudo[206102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:49 compute-0 python3.9[206104]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:49 compute-0 sudo[206102]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:50 compute-0 sudo[206254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ganzowvpuskznilxkrhyogydfeakdplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844189.812322-1025-103157595507688/AnsiballZ_copy.py'
Dec 04 10:29:50 compute-0 sudo[206254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:50 compute-0 ceph-mon[75358]: pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:50 compute-0 python3.9[206256]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:50 compute-0 sudo[206254]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:50 compute-0 sudo[206406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamcgxjnecyfagswxbgawkbgyvsfntme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844190.4212053-1025-165748715130771/AnsiballZ_copy.py'
Dec 04 10:29:50 compute-0 sudo[206406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:50 compute-0 python3.9[206408]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:50 compute-0 sudo[206406]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:51 compute-0 sudo[206558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyodvgejpqyrwvdqdnhjkksrhhgoqoau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844190.9944115-1025-8734823148209/AnsiballZ_copy.py'
Dec 04 10:29:51 compute-0 sudo[206558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:51 compute-0 python3.9[206560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:51 compute-0 sudo[206558]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:51 compute-0 sudo[206710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwkfphlaqprootzbcdatmjabkjesevvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844191.5722213-1025-124366421360622/AnsiballZ_copy.py'
Dec 04 10:29:51 compute-0 sudo[206710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:51 compute-0 python3.9[206712]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:52 compute-0 sudo[206710]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:52 compute-0 ceph-mon[75358]: pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:52 compute-0 sudo[206862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znsoezahuoqnhrkzmhvyzwgrqbzhockc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844192.1785212-1061-264785137597717/AnsiballZ_systemd.py'
Dec 04 10:29:52 compute-0 sudo[206862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:52 compute-0 python3.9[206864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:29:52 compute-0 systemd[1]: Reloading.
Dec 04 10:29:52 compute-0 systemd-rc-local-generator[206891]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:29:52 compute-0 systemd-sysv-generator[206894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:29:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:53 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 04 10:29:53 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 04 10:29:53 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 04 10:29:53 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 04 10:29:53 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 04 10:29:53 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 04 10:29:53 compute-0 sudo[206862]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:53 compute-0 sudo[207054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rckcrvensqchuffthxtscfonkjzltvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844193.3864727-1061-156430412926802/AnsiballZ_systemd.py'
Dec 04 10:29:53 compute-0 sudo[207054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:53 compute-0 python3.9[207056]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:29:54 compute-0 systemd[1]: Reloading.
Dec 04 10:29:54 compute-0 systemd-rc-local-generator[207104]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:29:54 compute-0 systemd-sysv-generator[207110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:29:54 compute-0 podman[207058]: 2025-12-04 10:29:54.150530574 +0000 UTC m=+0.130692282 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 04 10:29:54 compute-0 ceph-mon[75358]: pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:54 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 04 10:29:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 04 10:29:54 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 04 10:29:54 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 04 10:29:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 04 10:29:54 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 04 10:29:54 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 04 10:29:54 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 04 10:29:54 compute-0 sudo[207054]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:54 compute-0 sudo[207295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biunygltvxswlkxkxackosjboiwzbzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844194.5838776-1061-146290663526258/AnsiballZ_systemd.py'
Dec 04 10:29:54 compute-0 sudo[207295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.895 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:29:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:29:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:29:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:55 compute-0 python3.9[207297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:29:55 compute-0 systemd[1]: Reloading.
Dec 04 10:29:55 compute-0 ceph-mon[75358]: pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:55 compute-0 systemd-rc-local-generator[207322]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:29:55 compute-0 systemd-sysv-generator[207326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:29:55 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 04 10:29:55 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 04 10:29:55 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 04 10:29:55 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 04 10:29:55 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 04 10:29:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 04 10:29:55 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 04 10:29:55 compute-0 sudo[207295]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:55 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 04 10:29:55 compute-0 podman[207434]: 2025-12-04 10:29:55.917035813 +0000 UTC m=+0.056478559 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 04 10:29:55 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 04 10:29:56 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 04 10:29:56 compute-0 sudo[207534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqdjlrhqmfbbflzzfitbzjdkhjxspev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844195.756592-1061-28665614463928/AnsiballZ_systemd.py'
Dec 04 10:29:56 compute-0 sudo[207534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:56 compute-0 python3.9[207536]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:29:56 compute-0 systemd[1]: Reloading.
Dec 04 10:29:56 compute-0 systemd-rc-local-generator[207565]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:29:56 compute-0 systemd-sysv-generator[207569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:29:56 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 04 10:29:56 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 04 10:29:56 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 04 10:29:56 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 04 10:29:56 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 04 10:29:56 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 04 10:29:56 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 04 10:29:56 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 04 10:29:56 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 04 10:29:56 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 04 10:29:56 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 04 10:29:56 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 04 10:29:56 compute-0 sudo[207534]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:56 compute-0 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 849d845a-6d91-417d-93c6-3983faec16d6
Dec 04 10:29:56 compute-0 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:57 compute-0 sudo[207752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indziawbgzmktllcknqqcfnvgnskwzks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844196.945536-1061-111294782511715/AnsiballZ_systemd.py'
Dec 04 10:29:57 compute-0 sudo[207752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:57 compute-0 python3.9[207754]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:29:57 compute-0 systemd[1]: Reloading.
Dec 04 10:29:57 compute-0 systemd-rc-local-generator[207779]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:29:57 compute-0 systemd-sysv-generator[207783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:29:57 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 04 10:29:57 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 04 10:29:57 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 04 10:29:57 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 04 10:29:57 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 04 10:29:57 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 04 10:29:57 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:29:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:29:57 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 04 10:29:57 compute-0 sudo[207752]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:58 compute-0 ceph-mon[75358]: pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:58 compute-0 sudo[207963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnmbsraadyrnogdfyeolypagugthgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844198.2042682-1098-47762276529951/AnsiballZ_file.py'
Dec 04 10:29:58 compute-0 sudo[207963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:29:58 compute-0 python3.9[207965]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:29:58 compute-0 sudo[207963]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:29:59 compute-0 sudo[208115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgtwjbwfmdzcensulxznzrxpqakkcsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844198.7943265-1106-210006496777370/AnsiballZ_find.py'
Dec 04 10:29:59 compute-0 sudo[208115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:29:59 compute-0 python3.9[208117]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:29:59 compute-0 sudo[208115]: pam_unix(sudo:session): session closed for user root
Dec 04 10:29:59 compute-0 sudo[208267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyadwgehktpfrpioguasnboscamuffri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844199.4240115-1114-221509944670368/AnsiballZ_command.py'
Dec 04 10:29:59 compute-0 sudo[208267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:00 compute-0 python3.9[208269]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:00 compute-0 sudo[208267]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:00 compute-0 ceph-mon[75358]: pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:00 compute-0 python3.9[208423]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:30:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:01 compute-0 python3.9[208573]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:02 compute-0 python3.9[208694]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844201.139754-1133-66970071124744/.source.xml follow=False _original_basename=secret.xml.j2 checksum=48aecb49cd31a3c01b7ae17e3d1019c6e6eee501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:02 compute-0 ceph-mon[75358]: pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:02 compute-0 sudo[208844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htykboisbopzpmiawmsxglctfcbjrxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844202.3607526-1148-168843124267098/AnsiballZ_command.py'
Dec 04 10:30:02 compute-0 sudo[208844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:02 compute-0 python3.9[208846]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:02 compute-0 polkitd[43629]: Registered Authentication Agent for unix-process:208848:331248 (system bus name :1.2582 [pkttyagent --process 208848 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 04 10:30:02 compute-0 polkitd[43629]: Unregistered Authentication Agent for unix-process:208848:331248 (system bus name :1.2582, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 04 10:30:02 compute-0 polkitd[43629]: Registered Authentication Agent for unix-process:208847:331247 (system bus name :1.2583 [pkttyagent --process 208847 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 04 10:30:02 compute-0 polkitd[43629]: Unregistered Authentication Agent for unix-process:208847:331247 (system bus name :1.2583, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 04 10:30:03 compute-0 sudo[208844]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:03 compute-0 ceph-mon[75358]: pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:03 compute-0 python3.9[209008]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:04 compute-0 sudo[209158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buagtbwcxmflzmfgyxrdlkqadovihjsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844203.8741484-1164-150787144566440/AnsiballZ_command.py'
Dec 04 10:30:04 compute-0 sudo[209158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:04 compute-0 sudo[209158]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:04 compute-0 sudo[209311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coljhojxyowjnfddlhulwaeupmdthben ; FSID=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d KEY=AQC7XjFpAAAAABAAfAp/GPFiYDh+96uFEDn7ew== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844204.5326018-1172-226843084339871/AnsiballZ_command.py'
Dec 04 10:30:04 compute-0 sudo[209311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:04 compute-0 polkitd[43629]: Registered Authentication Agent for unix-process:209314:331461 (system bus name :1.2586 [pkttyagent --process 209314 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 04 10:30:05 compute-0 polkitd[43629]: Unregistered Authentication Agent for unix-process:209314:331461 (system bus name :1.2586, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 04 10:30:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:05 compute-0 sudo[209311]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:05 compute-0 ceph-mon[75358]: pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:05 compute-0 sudo[209469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edflnrwxakbsnciuhgdmggczvoqxjykl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844205.3743742-1180-64921587201984/AnsiballZ_copy.py'
Dec 04 10:30:05 compute-0 sudo[209469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:05 compute-0 python3.9[209471]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:05 compute-0 sudo[209469]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:06 compute-0 sudo[209621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqxxoplqchtdghkybsrowzvoutoxjmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844206.0544598-1188-139999483372688/AnsiballZ_stat.py'
Dec 04 10:30:06 compute-0 sudo[209621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:06 compute-0 python3.9[209623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:06 compute-0 sudo[209621]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:06 compute-0 sudo[209744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-humquvsswyyussefnltygrdvdytboium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844206.0544598-1188-139999483372688/AnsiballZ_copy.py'
Dec 04 10:30:06 compute-0 sudo[209744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:07 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 04 10:30:07 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.000s CPU time.
Dec 04 10:30:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:07 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 04 10:30:07 compute-0 python3.9[209746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844206.0544598-1188-139999483372688/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:07 compute-0 sudo[209744]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:07 compute-0 sudo[209897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvlrddxaefpkbdcjxgehpbchamwxbklj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844207.4631312-1204-232977501121739/AnsiballZ_file.py'
Dec 04 10:30:07 compute-0 sudo[209897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:07 compute-0 python3.9[209899]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:08 compute-0 sudo[209897]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:08 compute-0 ceph-mon[75358]: pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:08 compute-0 sudo[210049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wisqdcdwovbhmgrynvttoyngxilzugdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844208.1610076-1212-76492478589603/AnsiballZ_stat.py'
Dec 04 10:30:08 compute-0 sudo[210049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:08 compute-0 python3.9[210051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:08 compute-0 sudo[210049]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:08 compute-0 sudo[210127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiifuavudobbcxpfkgatjhgmiwwrunwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844208.1610076-1212-76492478589603/AnsiballZ_file.py'
Dec 04 10:30:08 compute-0 sudo[210127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:09 compute-0 python3.9[210129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:09 compute-0 sudo[210127]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:09 compute-0 sudo[210279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oszmpmeugrqsutmvgjfeubghijmyusfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844209.2727723-1224-1918197593766/AnsiballZ_stat.py'
Dec 04 10:30:09 compute-0 sudo[210279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:09 compute-0 python3.9[210281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:09 compute-0 sudo[210279]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:10 compute-0 sudo[210357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgmpzwdfpvvblphkxqjpnxqghzwglat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844209.2727723-1224-1918197593766/AnsiballZ_file.py'
Dec 04 10:30:10 compute-0 sudo[210357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:10 compute-0 ceph-mon[75358]: pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:10 compute-0 python3.9[210359]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cci2evha recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:10 compute-0 sudo[210357]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:10 compute-0 sudo[210509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfcufprvoeccbnyruuxsnvrdvyadwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844210.3997507-1236-60149910892452/AnsiballZ_stat.py'
Dec 04 10:30:10 compute-0 sudo[210509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:10 compute-0 python3.9[210511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:10 compute-0 sudo[210509]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:11 compute-0 sudo[210587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttieusumgfbrayfsnqgulwsruidcnkug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844210.3997507-1236-60149910892452/AnsiballZ_file.py'
Dec 04 10:30:11 compute-0 sudo[210587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:11 compute-0 ceph-mon[75358]: pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:11 compute-0 python3.9[210589]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:11 compute-0 sudo[210587]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:11 compute-0 sudo[210739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcclhwgkldfsthsmfwktbqtabcyqifgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844211.4252574-1249-42311784109682/AnsiballZ_command.py'
Dec 04 10:30:11 compute-0 sudo[210739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:11 compute-0 python3.9[210741]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:11 compute-0 sudo[210739]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:11 compute-0 sudo[210743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:30:11 compute-0 sudo[210743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:11 compute-0 sudo[210743]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:12 compute-0 sudo[210792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:30:12 compute-0 sudo[210792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:12 compute-0 sudo[210962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lurixwpacugucvrsqhnrjlhliiosrazu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764844212.0674584-1257-193603672476937/AnsiballZ_edpm_nftables_from_files.py'
Dec 04 10:30:12 compute-0 sudo[210962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:12 compute-0 sudo[210792]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:30:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:30:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:30:12 compute-0 sudo[210977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:30:12 compute-0 sudo[210977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:12 compute-0 sudo[210977]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:12 compute-0 python3[210964]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 04 10:30:12 compute-0 sudo[210962]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:12 compute-0 sudo[211002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:30:12 compute-0 sudo[211002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:12 compute-0 podman[211115]: 2025-12-04 10:30:12.975735374 +0000 UTC m=+0.041879308 container create f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:30:13 compute-0 systemd[1]: Started libpod-conmon-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope.
Dec 04 10:30:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:12.960386323 +0000 UTC m=+0.026530267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:13.06078072 +0000 UTC m=+0.126924684 container init f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:13.068549989 +0000 UTC m=+0.134693913 container start f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:13.073416853 +0000 UTC m=+0.139560807 container attach f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:30:13 compute-0 systemd[1]: libpod-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope: Deactivated successfully.
Dec 04 10:30:13 compute-0 quizzical_davinci[211154]: 167 167
Dec 04 10:30:13 compute-0 conmon[211154]: conmon f78af6297ca86d097076 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope/container/memory.events
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:13.075528457 +0000 UTC m=+0.141672381 container died f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa842cd248bdd189fa795eefbb82e6719414af03faf0c64ebcb3cee9f0e2226-merged.mount: Deactivated successfully.
Dec 04 10:30:13 compute-0 podman[211115]: 2025-12-04 10:30:13.114835708 +0000 UTC m=+0.180979642 container remove f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:30:13 compute-0 systemd[1]: libpod-conmon-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope: Deactivated successfully.
Dec 04 10:30:13 compute-0 sudo[211223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvduvltfeolgjhhnfjguhvalttutwhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844212.838426-1265-189900713419693/AnsiballZ_stat.py'
Dec 04 10:30:13 compute-0 sudo[211223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:13 compute-0 podman[211228]: 2025-12-04 10:30:13.269776486 +0000 UTC m=+0.042742920 container create 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:30:13 compute-0 systemd[1]: Started libpod-conmon-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope.
Dec 04 10:30:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:13 compute-0 podman[211228]: 2025-12-04 10:30:13.251281685 +0000 UTC m=+0.024248119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:13 compute-0 podman[211228]: 2025-12-04 10:30:13.372415481 +0000 UTC m=+0.145381935 container init 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:30:13 compute-0 podman[211228]: 2025-12-04 10:30:13.380122748 +0000 UTC m=+0.153089182 container start 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:30:13 compute-0 podman[211228]: 2025-12-04 10:30:13.383833962 +0000 UTC m=+0.156800426 container attach 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:30:13 compute-0 python3.9[211236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:13 compute-0 sudo[211223]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:13 compute-0 ceph-mon[75358]: pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:13 compute-0 sudo[211330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmytjbxwcsvwfxzuqkwplpyhxdwzzept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844212.838426-1265-189900713419693/AnsiballZ_file.py'
Dec 04 10:30:13 compute-0 sudo[211330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:13 compute-0 vibrant_sammet[211245]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:30:13 compute-0 vibrant_sammet[211245]: --> All data devices are unavailable
Dec 04 10:30:13 compute-0 python3.9[211333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:13 compute-0 systemd[1]: libpod-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope: Deactivated successfully.
Dec 04 10:30:13 compute-0 sudo[211330]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:13 compute-0 podman[211343]: 2025-12-04 10:30:13.889845406 +0000 UTC m=+0.023247874 container died 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687-merged.mount: Deactivated successfully.
Dec 04 10:30:14 compute-0 podman[211343]: 2025-12-04 10:30:14.039125339 +0000 UTC m=+0.172527797 container remove 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:30:14 compute-0 systemd[1]: libpod-conmon-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope: Deactivated successfully.
Dec 04 10:30:14 compute-0 sudo[211002]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:14 compute-0 sudo[211434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:30:14 compute-0 sudo[211434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:14 compute-0 sudo[211434]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:14 compute-0 sudo[211460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:30:14 compute-0 sudo[211460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:14 compute-0 sudo[211557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyetgkyippmgjivmidsbsdjlrtxjqtkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844214.0116904-1277-276416486860029/AnsiballZ_stat.py'
Dec 04 10:30:14 compute-0 sudo[211557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.467816421 +0000 UTC m=+0.045613152 container create 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:30:14 compute-0 systemd[1]: Started libpod-conmon-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope.
Dec 04 10:30:14 compute-0 python3.9[211559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:14 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.449151096 +0000 UTC m=+0.026947867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.550308153 +0000 UTC m=+0.128104904 container init 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.558679507 +0000 UTC m=+0.136476248 container start 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.562156326 +0000 UTC m=+0.139953057 container attach 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:30:14 compute-0 xenodochial_clarke[211590]: 167 167
Dec 04 10:30:14 compute-0 systemd[1]: libpod-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope: Deactivated successfully.
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.564378052 +0000 UTC m=+0.142174793 container died 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:30:14 compute-0 sudo[211557]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2e22aea59047f045c245f7b545d8e9a0fbb5ca07d13de35958dbe3932b6c903-merged.mount: Deactivated successfully.
Dec 04 10:30:14 compute-0 podman[211573]: 2025-12-04 10:30:14.599743394 +0000 UTC m=+0.177540135 container remove 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:30:14 compute-0 systemd[1]: libpod-conmon-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope: Deactivated successfully.
Dec 04 10:30:14 compute-0 podman[211663]: 2025-12-04 10:30:14.754580958 +0000 UTC m=+0.044413632 container create 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:30:14 compute-0 sudo[211704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scuvtvcdhcmmbukjywwfoftbyogkmgch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844214.0116904-1277-276416486860029/AnsiballZ_file.py'
Dec 04 10:30:14 compute-0 sudo[211704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:14 compute-0 systemd[1]: Started libpod-conmon-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope.
Dec 04 10:30:14 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:14 compute-0 podman[211663]: 2025-12-04 10:30:14.73504398 +0000 UTC m=+0.024876684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:14 compute-0 podman[211663]: 2025-12-04 10:30:14.84059706 +0000 UTC m=+0.130429764 container init 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:30:14 compute-0 podman[211663]: 2025-12-04 10:30:14.846628754 +0000 UTC m=+0.136461438 container start 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:30:14 compute-0 podman[211663]: 2025-12-04 10:30:14.849795394 +0000 UTC m=+0.139628078 container attach 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:30:15 compute-0 python3.9[211708]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:15 compute-0 sudo[211704]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:15 compute-0 eloquent_curie[211709]: {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     "0": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "devices": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "/dev/loop3"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             ],
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_name": "ceph_lv0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_size": "21470642176",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "name": "ceph_lv0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "tags": {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.crush_device_class": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.encrypted": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_id": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.vdo": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.with_tpm": "0"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             },
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "vg_name": "ceph_vg0"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         }
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     ],
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     "1": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "devices": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "/dev/loop4"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             ],
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_name": "ceph_lv1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_size": "21470642176",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "name": "ceph_lv1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "tags": {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.crush_device_class": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.encrypted": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_id": "1",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.vdo": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.with_tpm": "0"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             },
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "vg_name": "ceph_vg1"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         }
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     ],
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     "2": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "devices": [
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "/dev/loop5"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             ],
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_name": "ceph_lv2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_size": "21470642176",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "name": "ceph_lv2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "tags": {
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.cluster_name": "ceph",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.crush_device_class": "",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.encrypted": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.objectstore": "bluestore",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osd_id": "2",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.vdo": "0",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:                 "ceph.with_tpm": "0"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             },
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "type": "block",
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:             "vg_name": "ceph_vg2"
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:         }
Dec 04 10:30:15 compute-0 eloquent_curie[211709]:     ]
Dec 04 10:30:15 compute-0 eloquent_curie[211709]: }
Dec 04 10:30:15 compute-0 systemd[1]: libpod-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope: Deactivated successfully.
Dec 04 10:30:15 compute-0 podman[211663]: 2025-12-04 10:30:15.149976973 +0000 UTC m=+0.439809697 container died 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8-merged.mount: Deactivated successfully.
Dec 04 10:30:15 compute-0 podman[211663]: 2025-12-04 10:30:15.196325744 +0000 UTC m=+0.486158418 container remove 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:30:15 compute-0 systemd[1]: libpod-conmon-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope: Deactivated successfully.
Dec 04 10:30:15 compute-0 sudo[211460]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:15 compute-0 sudo[211807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:30:15 compute-0 sudo[211807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:15 compute-0 sudo[211807]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:15 compute-0 sudo[211855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:30:15 compute-0 sudo[211855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:15 compute-0 sudo[211930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffceuzhuuqqqeoehtzmznfywljolcqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844215.168355-1289-41522888553482/AnsiballZ_stat.py'
Dec 04 10:30:15 compute-0 sudo[211930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:15 compute-0 python3.9[211932]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:15 compute-0 sudo[211930]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:15 compute-0 podman[211945]: 2025-12-04 10:30:15.591700618 +0000 UTC m=+0.021931900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:15 compute-0 sudo[212034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgcipilkbradkmkmtjmtfmmkxodmfat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844215.168355-1289-41522888553482/AnsiballZ_file.py'
Dec 04 10:30:15 compute-0 sudo[212034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:16 compute-0 python3.9[212036]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:16 compute-0 sudo[212034]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.247633702 +0000 UTC m=+0.677864954 container create ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:30:16 compute-0 ceph-mon[75358]: pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:16 compute-0 systemd[1]: Started libpod-conmon-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope.
Dec 04 10:30:16 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.341430301 +0000 UTC m=+0.771661593 container init ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.34965915 +0000 UTC m=+0.779890412 container start ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.353291853 +0000 UTC m=+0.783523115 container attach ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:30:16 compute-0 trusting_dijkstra[212109]: 167 167
Dec 04 10:30:16 compute-0 systemd[1]: libpod-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope: Deactivated successfully.
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.356166496 +0000 UTC m=+0.786397758 container died ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-99a8ba46e8bda43443296e45247d7177d3cbde2f12352d1d704612089e361707-merged.mount: Deactivated successfully.
Dec 04 10:30:16 compute-0 podman[211945]: 2025-12-04 10:30:16.393960729 +0000 UTC m=+0.824191991 container remove ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:30:16 compute-0 systemd[1]: libpod-conmon-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope: Deactivated successfully.
Dec 04 10:30:16 compute-0 podman[212186]: 2025-12-04 10:30:16.521448058 +0000 UTC m=+0.021380216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:30:16 compute-0 sudo[212228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxawafzulnsnmbzwjtrzdrsgcnufrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844216.232056-1301-84900815639467/AnsiballZ_stat.py'
Dec 04 10:30:16 compute-0 sudo[212228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:16 compute-0 podman[212186]: 2025-12-04 10:30:16.735774378 +0000 UTC m=+0.235706516 container create daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:30:16 compute-0 systemd[1]: Started libpod-conmon-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope.
Dec 04 10:30:16 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:30:16 compute-0 podman[212186]: 2025-12-04 10:30:16.841513673 +0000 UTC m=+0.341445831 container init daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:30:16 compute-0 podman[212186]: 2025-12-04 10:30:16.849961328 +0000 UTC m=+0.349893466 container start daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:30:16 compute-0 podman[212186]: 2025-12-04 10:30:16.85435321 +0000 UTC m=+0.354285368 container attach daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:30:16 compute-0 python3.9[212230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:16 compute-0 sudo[212228]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:17 compute-0 sudo[212324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjliogqkofkuydqlftlgfjdayvwiozcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844216.232056-1301-84900815639467/AnsiballZ_file.py'
Dec 04 10:30:17 compute-0 sudo[212324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:17 compute-0 ceph-mon[75358]: pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:17 compute-0 python3.9[212326]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:17 compute-0 sudo[212324]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:17 compute-0 lvm[212438]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:30:17 compute-0 lvm[212437]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:30:17 compute-0 lvm[212438]: VG ceph_vg1 finished
Dec 04 10:30:17 compute-0 lvm[212437]: VG ceph_vg0 finished
Dec 04 10:30:17 compute-0 lvm[212440]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:30:17 compute-0 lvm[212440]: VG ceph_vg2 finished
Dec 04 10:30:17 compute-0 lucid_antonelli[212234]: {}
Dec 04 10:30:17 compute-0 systemd[1]: libpod-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Deactivated successfully.
Dec 04 10:30:17 compute-0 systemd[1]: libpod-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Consumed 1.274s CPU time.
Dec 04 10:30:17 compute-0 podman[212186]: 2025-12-04 10:30:17.62907943 +0000 UTC m=+1.129011568 container died daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:30:17 compute-0 sudo[212556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astfirzsiwerzpcfdyfbrxbdkbdbfiag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844217.4834547-1313-228323929181990/AnsiballZ_stat.py'
Dec 04 10:30:17 compute-0 sudo[212556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:17 compute-0 python3.9[212558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:18 compute-0 sudo[212556]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:18 compute-0 sudo[212681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfxgwcsisyrpkauzkqvaxdqiurfftcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844217.4834547-1313-228323929181990/AnsiballZ_copy.py'
Dec 04 10:30:18 compute-0 sudo[212681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd-merged.mount: Deactivated successfully.
Dec 04 10:30:18 compute-0 podman[212186]: 2025-12-04 10:30:18.501009885 +0000 UTC m=+2.000942023 container remove daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:30:18 compute-0 systemd[1]: libpod-conmon-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Deactivated successfully.
Dec 04 10:30:18 compute-0 sudo[211855]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:30:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:30:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:18 compute-0 python3.9[212683]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844217.4834547-1313-228323929181990/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:18 compute-0 sudo[212681]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:18 compute-0 sudo[212685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:30:18 compute-0 sudo[212685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:30:18 compute-0 sudo[212685]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:19 compute-0 sudo[212859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwcepneeqkzirlgjjzlnlvgtcmprsza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844218.7700884-1328-159513604832592/AnsiballZ_file.py'
Dec 04 10:30:19 compute-0 sudo[212859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:19 compute-0 python3.9[212861]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:19 compute-0 sudo[212859]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:30:19 compute-0 ceph-mon[75358]: pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:19 compute-0 sudo[213011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxicjxdoeefvzvaquzfgrdcpoyushwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844219.3671904-1336-105598582004124/AnsiballZ_command.py'
Dec 04 10:30:19 compute-0 sudo[213011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:19 compute-0 python3.9[213013]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:19 compute-0 sudo[213011]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:20 compute-0 sudo[213166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evmtuqkekqdbrukxknwfdyzfzkybakeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844219.9687867-1344-142538806553510/AnsiballZ_blockinfile.py'
Dec 04 10:30:20 compute-0 sudo[213166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:20 compute-0 python3.9[213168]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:20 compute-0 sudo[213166]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:21 compute-0 sshd-session[212200]: Invalid user syncthing from 101.47.163.20 port 46330
Dec 04 10:30:21 compute-0 sudo[213318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbpubarlchgpqqzdqaotggvhrpftsusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844220.8258002-1353-152563578515867/AnsiballZ_command.py'
Dec 04 10:30:21 compute-0 sudo[213318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:21 compute-0 python3.9[213320]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:21 compute-0 sshd-session[212200]: Received disconnect from 101.47.163.20 port 46330:11: Bye Bye [preauth]
Dec 04 10:30:21 compute-0 sshd-session[212200]: Disconnected from invalid user syncthing 101.47.163.20 port 46330 [preauth]
Dec 04 10:30:21 compute-0 sudo[213318]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:21 compute-0 sudo[213471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmalwsfijdyjodgghshljnktsjdkknny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844221.5499575-1361-271349400976883/AnsiballZ_stat.py'
Dec 04 10:30:21 compute-0 sudo[213471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:21 compute-0 python3.9[213473]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:30:22 compute-0 sudo[213471]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:22 compute-0 ceph-mon[75358]: pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:22 compute-0 sudo[213625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvbrqzpuqlricmanpjbsnpexhcpepfms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844222.1505609-1369-131883665871832/AnsiballZ_command.py'
Dec 04 10:30:22 compute-0 sudo[213625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:22 compute-0 python3.9[213627]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:22 compute-0 sudo[213625]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:23 compute-0 sudo[213780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugsdharpcfsafciaaqrehqcwqxwqwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844222.8057694-1377-274925336793629/AnsiballZ_file.py'
Dec 04 10:30:23 compute-0 sudo[213780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:23 compute-0 python3.9[213782]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:23 compute-0 sudo[213780]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:23 compute-0 sudo[213932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiijqefeoigzimeatqwcccqtfsgugiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844223.4558349-1385-188262302337939/AnsiballZ_stat.py'
Dec 04 10:30:23 compute-0 sudo[213932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:23 compute-0 python3.9[213934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:23 compute-0 sudo[213932]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:24 compute-0 ceph-mon[75358]: pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:24 compute-0 sudo[214055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldjsqchiffpnxqkkiqewbrqvtehsnxrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844223.4558349-1385-188262302337939/AnsiballZ_copy.py'
Dec 04 10:30:24 compute-0 sudo[214055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:24 compute-0 python3.9[214057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844223.4558349-1385-188262302337939/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:24 compute-0 sudo[214055]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:24 compute-0 sudo[214217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yggvsxigcdipjawsmgnhzptjleserpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844224.5838633-1400-191982531484134/AnsiballZ_stat.py'
Dec 04 10:30:24 compute-0 sudo[214217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:24 compute-0 podman[214181]: 2025-12-04 10:30:24.953134196 +0000 UTC m=+0.090422661 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:30:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:25 compute-0 python3.9[214227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:25 compute-0 sudo[214217]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:25 compute-0 sudo[214356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haymjzfyyhxaxabwtreguuezgbiyonsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844224.5838633-1400-191982531484134/AnsiballZ_copy.py'
Dec 04 10:30:25 compute-0 sudo[214356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:25 compute-0 python3.9[214358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844224.5838633-1400-191982531484134/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:25 compute-0 sudo[214356]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:26 compute-0 podman[214482]: 2025-12-04 10:30:26.186250394 +0000 UTC m=+0.057793647 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:30:26 compute-0 ceph-mon[75358]: pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:26 compute-0 sudo[214526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnolrymxjpqprtquscqovfewodymziy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844225.9014478-1415-146146697725042/AnsiballZ_stat.py'
Dec 04 10:30:26 compute-0 sudo[214526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:26 compute-0 python3.9[214529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:26 compute-0 sudo[214526]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:30:26
Dec 04 10:30:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:30:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:30:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Dec 04 10:30:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:30:26 compute-0 sudo[214650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmjmamlgcdqrbdumdrhngymwrwlgdkgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844225.9014478-1415-146146697725042/AnsiballZ_copy.py'
Dec 04 10:30:26 compute-0 sudo[214650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:26 compute-0 python3.9[214652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844225.9014478-1415-146146697725042/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:26 compute-0 sudo[214650]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:27 compute-0 sudo[214802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeilayitwfyspyjscjrcboknvjtzqndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844227.126633-1430-227866470829411/AnsiballZ_systemd.py'
Dec 04 10:30:27 compute-0 sudo[214802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:27 compute-0 python3.9[214804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:30:27 compute-0 systemd[1]: Reloading.
Dec 04 10:30:27 compute-0 systemd-rc-local-generator[214831]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:30:27 compute-0 systemd-sysv-generator[214835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:30:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:30:28 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 04 10:30:28 compute-0 sudo[214802]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:28 compute-0 ceph-mon[75358]: pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:28 compute-0 sudo[214993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrfkikgxxpvdcpgjefkxpbzbfjqjilpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844228.335515-1438-148444489599055/AnsiballZ_systemd.py'
Dec 04 10:30:28 compute-0 sudo[214993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:28 compute-0 python3.9[214995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 04 10:30:28 compute-0 systemd[1]: Reloading.
Dec 04 10:30:29 compute-0 systemd-rc-local-generator[215018]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:30:29 compute-0 systemd-sysv-generator[215023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:30:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:29 compute-0 systemd[1]: Reloading.
Dec 04 10:30:29 compute-0 systemd-rc-local-generator[215058]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:30:29 compute-0 systemd-sysv-generator[215061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:30:29 compute-0 sudo[214993]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:30 compute-0 sshd-session[156334]: Connection closed by 192.168.122.30 port 51686
Dec 04 10:30:30 compute-0 sshd-session[156331]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:30:30 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 04 10:30:30 compute-0 systemd[1]: session-49.scope: Consumed 3min 28.019s CPU time.
Dec 04 10:30:30 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Dec 04 10:30:30 compute-0 systemd-logind[798]: Removed session 49.
Dec 04 10:30:30 compute-0 ceph-mon[75358]: pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:31 compute-0 ceph-mon[75358]: pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:33 compute-0 ceph-mon[75358]: pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:35 compute-0 sshd-session[215090]: Accepted publickey for zuul from 192.168.122.30 port 49164 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:30:35 compute-0 systemd-logind[798]: New session 50 of user zuul.
Dec 04 10:30:35 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 04 10:30:35 compute-0 sshd-session[215090]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:30:36 compute-0 ceph-mon[75358]: pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:36 compute-0 python3.9[215243]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:30:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:30:37 compute-0 python3.9[215397]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:30:37 compute-0 network[215416]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:30:37 compute-0 network[215417]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:30:37 compute-0 network[215418]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:30:38 compute-0 ceph-mon[75358]: pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:38 compute-0 sshd-session[215398]: Invalid user int from 103.179.218.243 port 42222
Dec 04 10:30:38 compute-0 sshd-session[215398]: Received disconnect from 103.179.218.243 port 42222:11: Bye Bye [preauth]
Dec 04 10:30:38 compute-0 sshd-session[215398]: Disconnected from invalid user int 103.179.218.243 port 42222 [preauth]
Dec 04 10:30:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:40 compute-0 ceph-mon[75358]: pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:41 compute-0 ceph-mon[75358]: pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:42 compute-0 sudo[215689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkcqxgwkndtllfmatokeiaxzgspucbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844241.7872698-47-157128393372706/AnsiballZ_setup.py'
Dec 04 10:30:42 compute-0 sudo[215689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:42 compute-0 python3.9[215691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 04 10:30:42 compute-0 sudo[215689]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:43 compute-0 sudo[215773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stnyvpwbkvedvucnxohvzvdadlrbhsza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844241.7872698-47-157128393372706/AnsiballZ_dnf.py'
Dec 04 10:30:43 compute-0 sudo[215773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:43 compute-0 python3.9[215775]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:30:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:44 compute-0 ceph-mon[75358]: pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:45 compute-0 ceph-mon[75358]: pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:46 compute-0 sshd-session[215779]: Invalid user cgpexpert from 74.249.218.27 port 37420
Dec 04 10:30:46 compute-0 sshd-session[215779]: Received disconnect from 74.249.218.27 port 37420:11: Bye Bye [preauth]
Dec 04 10:30:46 compute-0 sshd-session[215779]: Disconnected from invalid user cgpexpert 74.249.218.27 port 37420 [preauth]
Dec 04 10:30:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:47 compute-0 sshd-session[215777]: Invalid user astra from 103.149.86.230 port 37178
Dec 04 10:30:47 compute-0 sshd-session[215777]: Received disconnect from 103.149.86.230 port 37178:11: Bye Bye [preauth]
Dec 04 10:30:47 compute-0 sshd-session[215777]: Disconnected from invalid user astra 103.149.86.230 port 37178 [preauth]
Dec 04 10:30:48 compute-0 ceph-mon[75358]: pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:49 compute-0 sudo[215773]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:49 compute-0 sudo[215930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszinydikdtltidxtqjkhyugsozemxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844249.359341-59-121831854957694/AnsiballZ_stat.py'
Dec 04 10:30:49 compute-0 sudo[215930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:49 compute-0 python3.9[215932]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:30:49 compute-0 sudo[215930]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:50 compute-0 ceph-mon[75358]: pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:50 compute-0 sudo[216082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtifuzntbkbtscvjqxmrnjzrirlmhbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844250.1576407-69-84560681476733/AnsiballZ_command.py'
Dec 04 10:30:50 compute-0 sudo[216082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:50 compute-0 python3.9[216084]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:50 compute-0 sudo[216082]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:51 compute-0 sudo[216235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfidngnzgkztrltunoddsheatnecdivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844251.03199-79-266551631866090/AnsiballZ_stat.py'
Dec 04 10:30:51 compute-0 sudo[216235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:51 compute-0 python3.9[216237]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:30:51 compute-0 sudo[216235]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:52 compute-0 sudo[216387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drplwbnwienodkofxbaywjzrxrxycapw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844251.6890519-87-249564766553898/AnsiballZ_command.py'
Dec 04 10:30:52 compute-0 sudo[216387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:52 compute-0 ceph-mon[75358]: pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:52 compute-0 python3.9[216389]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:30:52 compute-0 sudo[216387]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:52 compute-0 sudo[216540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atrbqkxefcnvgazmncquvurrrwcjhawj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844252.4737446-95-225401281664202/AnsiballZ_stat.py'
Dec 04 10:30:52 compute-0 sudo[216540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:53 compute-0 python3.9[216542]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:30:53 compute-0 sudo[216540]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:53 compute-0 sudo[216663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbukgmcfwxqyxoxgkcvmwkvdmgjhgof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844252.4737446-95-225401281664202/AnsiballZ_copy.py'
Dec 04 10:30:53 compute-0 sudo[216663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:53 compute-0 python3.9[216665]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844252.4737446-95-225401281664202/.source.iscsi _original_basename=.gpr_it4z follow=False checksum=a94c711a5f59472c43c3025afd5714c35f3718f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:53 compute-0 sudo[216663]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:54 compute-0 ceph-mon[75358]: pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:54 compute-0 sudo[216815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poczfqcbbzmawsulczmrambnddprazbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844253.9822621-110-74447283510639/AnsiballZ_file.py'
Dec 04 10:30:54 compute-0 sudo[216815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:54 compute-0 python3.9[216817]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:54 compute-0 sudo[216815]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.896 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:30:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:30:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:30:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:55 compute-0 sudo[216977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgorgbzgnkovrghverpkwaczoyeeyenb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844254.7714498-118-136994000582866/AnsiballZ_lineinfile.py'
Dec 04 10:30:55 compute-0 sudo[216977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:55 compute-0 ceph-mon[75358]: pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:55 compute-0 podman[216941]: 2025-12-04 10:30:55.287281757 +0000 UTC m=+0.098247092 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 04 10:30:55 compute-0 python3.9[216987]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:30:55 compute-0 sudo[216977]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:56 compute-0 sudo[217145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmwlhtbghcrxhuohuvqodzgujyrsuiws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844255.6554248-127-183719590804069/AnsiballZ_systemd_service.py'
Dec 04 10:30:56 compute-0 sudo[217145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:56 compute-0 podman[217147]: 2025-12-04 10:30:56.335787211 +0000 UTC m=+0.053201235 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:30:56 compute-0 python3.9[217148]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:30:56 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 04 10:30:56 compute-0 sudo[217145]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:57 compute-0 sudo[217320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiwwdsmrpyumeqbagmlbwjwfgqmuosvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844256.906105-135-54264239185173/AnsiballZ_systemd_service.py'
Dec 04 10:30:57 compute-0 sudo[217320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:57 compute-0 python3.9[217322]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:30:57 compute-0 systemd[1]: Reloading.
Dec 04 10:30:57 compute-0 systemd-rc-local-generator[217352]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:30:57 compute-0 systemd-sysv-generator[217355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:30:57 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:30:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:30:57 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 04 10:30:57 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 04 10:30:57 compute-0 systemd[1]: Started Open-iSCSI.
Dec 04 10:30:57 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 04 10:30:57 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 04 10:30:58 compute-0 sudo[217320]: pam_unix(sudo:session): session closed for user root
Dec 04 10:30:58 compute-0 ceph-mon[75358]: pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:30:58 compute-0 sudo[217520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjppjqxleknqxultvdamplnzmdmiyzgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844258.318-146-31699841036577/AnsiballZ_service_facts.py'
Dec 04 10:30:58 compute-0 sudo[217520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:30:58 compute-0 python3.9[217522]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:30:58 compute-0 network[217539]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:30:58 compute-0 network[217540]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:30:58 compute-0 network[217541]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:30:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:30:59 compute-0 sshd-session[217545]: Invalid user mega from 217.154.62.22 port 51344
Dec 04 10:30:59 compute-0 sshd-session[217545]: Received disconnect from 217.154.62.22 port 51344:11: Bye Bye [preauth]
Dec 04 10:30:59 compute-0 sshd-session[217545]: Disconnected from invalid user mega 217.154.62.22 port 51344 [preauth]
Dec 04 10:31:00 compute-0 ceph-mon[75358]: pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:02 compute-0 sudo[217520]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:02 compute-0 ceph-mon[75358]: pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:02 compute-0 sudo[217813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txnpnwwgsclpvtwnhjsshfjqnydazsak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844262.377938-156-5659175052002/AnsiballZ_file.py'
Dec 04 10:31:02 compute-0 sudo[217813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:02 compute-0 python3.9[217815]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 04 10:31:02 compute-0 sudo[217813]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:03 compute-0 ceph-mon[75358]: pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:03 compute-0 sudo[217965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztioecnjqpirobqnvgsvfbqmwvarujrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844263.090992-164-3505485364018/AnsiballZ_modprobe.py'
Dec 04 10:31:03 compute-0 sudo[217965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:03 compute-0 python3.9[217967]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 04 10:31:03 compute-0 sudo[217965]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:04 compute-0 sudo[218121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdtdmzrcshklvjpmhashlcvonvsohmfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844263.9778721-172-131977077771746/AnsiballZ_stat.py'
Dec 04 10:31:04 compute-0 sudo[218121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:04 compute-0 python3.9[218123]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:04 compute-0 sudo[218121]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:04 compute-0 sudo[218244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbdrmxuldtmfvfhxfgntvxegoqgcxtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844263.9778721-172-131977077771746/AnsiballZ_copy.py'
Dec 04 10:31:04 compute-0 sudo[218244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:05 compute-0 python3.9[218246]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844263.9778721-172-131977077771746/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:05 compute-0 sudo[218244]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:05 compute-0 sudo[218396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-allzwnbbevhwiwyrtcegtqpqpbdppfce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844265.2456203-188-70101759074932/AnsiballZ_lineinfile.py'
Dec 04 10:31:05 compute-0 sudo[218396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:05 compute-0 python3.9[218398]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:05 compute-0 sudo[218396]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:06 compute-0 ceph-mon[75358]: pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:06 compute-0 sudo[218548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqrgfayoabdzppvxfsvvntkhnfoeyvlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844266.1845398-196-219067067025417/AnsiballZ_systemd.py'
Dec 04 10:31:06 compute-0 sudo[218548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:07 compute-0 python3.9[218550]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:31:07 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 04 10:31:07 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 04 10:31:07 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 04 10:31:07 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 04 10:31:07 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 04 10:31:07 compute-0 sudo[218548]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:07 compute-0 sudo[218704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekqspudeywcftcrkzsxwkgjnlaggvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844267.3648303-204-23634822056811/AnsiballZ_file.py'
Dec 04 10:31:07 compute-0 sudo[218704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:07 compute-0 python3.9[218706]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:07 compute-0 sudo[218704]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:08 compute-0 ceph-mon[75358]: pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:08 compute-0 sudo[218856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohkhgizyodkatlkwqwlcomzshphmivqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844268.1794248-213-134231724551089/AnsiballZ_stat.py'
Dec 04 10:31:08 compute-0 sudo[218856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:08 compute-0 python3.9[218858]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:08 compute-0 sudo[218856]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:09 compute-0 sudo[219008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilrqkezblglxpkefxrqntajaornzpvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844268.8782027-222-5534732007341/AnsiballZ_stat.py'
Dec 04 10:31:09 compute-0 sudo[219008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:09 compute-0 python3.9[219010]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:09 compute-0 sudo[219008]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:09 compute-0 sudo[219160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgfdffripabeqjuwctlkbmwjqlsgpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844269.6544032-230-205082806825033/AnsiballZ_stat.py'
Dec 04 10:31:09 compute-0 sudo[219160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:10 compute-0 python3.9[219162]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:10 compute-0 sudo[219160]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:10 compute-0 ceph-mon[75358]: pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:10 compute-0 sudo[219283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vehtndxuyjezafynsopzkzphhusccrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844269.6544032-230-205082806825033/AnsiballZ_copy.py'
Dec 04 10:31:10 compute-0 sudo[219283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:10 compute-0 python3.9[219285]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844269.6544032-230-205082806825033/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:10 compute-0 sudo[219283]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:11 compute-0 sudo[219435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-menplqviecyppwxslokjwnlkeechsbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844270.8893893-245-33040737856575/AnsiballZ_command.py'
Dec 04 10:31:11 compute-0 sudo[219435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:11 compute-0 python3.9[219437]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:31:11 compute-0 sudo[219435]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:11 compute-0 sudo[219588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwkyenyzbbssvxyqrvbzcmycmtimhjcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844271.6109767-253-2498625241128/AnsiballZ_lineinfile.py'
Dec 04 10:31:11 compute-0 sudo[219588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:12 compute-0 python3.9[219590]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:12 compute-0 sudo[219588]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:12 compute-0 ceph-mon[75358]: pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:12 compute-0 sudo[219740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euakyzmxooaaqjkdjxgbeuhlyvyptjgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844272.274636-261-82771911404141/AnsiballZ_replace.py'
Dec 04 10:31:12 compute-0 sudo[219740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:12 compute-0 python3.9[219742]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:12 compute-0 sudo[219740]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:13 compute-0 ceph-mon[75358]: pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:13 compute-0 sudo[219892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjnxuypmbglztjjymrvjiqwtbeujkqoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844273.158124-269-253258058016706/AnsiballZ_replace.py'
Dec 04 10:31:13 compute-0 sudo[219892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:13 compute-0 python3.9[219894]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:13 compute-0 sudo[219892]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:14 compute-0 sudo[220044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvqizcsscgngynxyqegwsyypwylfahbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844273.8610709-278-82428152506688/AnsiballZ_lineinfile.py'
Dec 04 10:31:14 compute-0 sudo[220044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:14 compute-0 python3.9[220046]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:14 compute-0 sudo[220044]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:14 compute-0 sudo[220196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjnjwzolyeaclkqtbwqwqodrsmwtctah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844274.4871364-278-3459162637609/AnsiballZ_lineinfile.py'
Dec 04 10:31:14 compute-0 sudo[220196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:14 compute-0 python3.9[220198]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:15 compute-0 sudo[220196]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:15 compute-0 sudo[220348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojhvniiamonxtytpjzaendapwiwmejhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844275.1585479-278-153831693093906/AnsiballZ_lineinfile.py'
Dec 04 10:31:15 compute-0 sudo[220348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:15 compute-0 python3.9[220350]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:15 compute-0 sudo[220348]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:16 compute-0 ceph-mon[75358]: pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:16 compute-0 sudo[220500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szanczsbksmixnlsejwixurvnhhuvsxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844275.854878-278-236354386830502/AnsiballZ_lineinfile.py'
Dec 04 10:31:16 compute-0 sudo[220500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:16 compute-0 python3.9[220502]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:16 compute-0 sudo[220500]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:16 compute-0 sudo[220652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvmqnvdzfeuifzhjkgznmqqtscirxjql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844276.5837467-307-231401450130560/AnsiballZ_stat.py'
Dec 04 10:31:16 compute-0 sudo[220652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:17 compute-0 python3.9[220654]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:17 compute-0 sudo[220652]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:17 compute-0 sshd-session[220732]: Invalid user radarr from 107.175.213.239 port 34728
Dec 04 10:31:17 compute-0 sshd-session[220732]: Received disconnect from 107.175.213.239 port 34728:11: Bye Bye [preauth]
Dec 04 10:31:17 compute-0 sshd-session[220732]: Disconnected from invalid user radarr 107.175.213.239 port 34728 [preauth]
Dec 04 10:31:17 compute-0 sudo[220808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmywrnqwkzwekmiaehzaoutvoyaufea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844277.2713256-315-90253475500497/AnsiballZ_file.py'
Dec 04 10:31:17 compute-0 sudo[220808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:17 compute-0 python3.9[220810]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:17 compute-0 sudo[220808]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:18 compute-0 ceph-mon[75358]: pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:18 compute-0 sudo[220960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxqiokwugirvkkhqtdlozhfduhlmhkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844277.9856234-324-170780616319209/AnsiballZ_file.py'
Dec 04 10:31:18 compute-0 sudo[220960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:18 compute-0 python3.9[220962]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:18 compute-0 sudo[220960]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:18 compute-0 sudo[221039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:31:18 compute-0 sudo[221039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:18 compute-0 sudo[221039]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:18 compute-0 sudo[221076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:31:18 compute-0 sudo[221076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:18 compute-0 sudo[221162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjthftknukximageuuwkwbhtcrdbuhjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844278.592459-332-174970156626123/AnsiballZ_stat.py'
Dec 04 10:31:18 compute-0 sudo[221162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:19 compute-0 python3.9[221164]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:19 compute-0 sudo[221162]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:19 compute-0 sudo[221271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqfkaokxezkpnntflerprqhpfuxqaqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844278.592459-332-174970156626123/AnsiballZ_file.py'
Dec 04 10:31:19 compute-0 sudo[221271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:19 compute-0 sudo[221076]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:31:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:31:19 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:31:19 compute-0 sudo[221274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:31:19 compute-0 sudo[221274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:19 compute-0 sudo[221274]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:19 compute-0 sudo[221299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:31:19 compute-0 sudo[221299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:19 compute-0 python3.9[221273]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:19 compute-0 sudo[221271]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.721084153 +0000 UTC m=+0.048012179 container create ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:31:19 compute-0 systemd[1]: Started libpod-conmon-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope.
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.698929343 +0000 UTC m=+0.025857419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:19 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.82243893 +0000 UTC m=+0.149366986 container init ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.830629818 +0000 UTC m=+0.157557844 container start ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:31:19 compute-0 goofy_tesla[221452]: 167 167
Dec 04 10:31:19 compute-0 systemd[1]: libpod-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope: Deactivated successfully.
Dec 04 10:31:19 compute-0 sudo[221517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlcdviyoyzxnabummxrwwfdpwoocnfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844279.6297195-332-154080841864007/AnsiballZ_stat.py'
Dec 04 10:31:19 compute-0 sudo[221517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.966392212 +0000 UTC m=+0.293320328 container attach ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:31:19 compute-0 podman[221388]: 2025-12-04 10:31:19.966969237 +0000 UTC m=+0.293897293 container died ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:31:20 compute-0 python3.9[221519]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:20 compute-0 sudo[221517]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:20 compute-0 sudo[221595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeeuactnsvgczpaowhzxwclatifpdqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844279.6297195-332-154080841864007/AnsiballZ_file.py'
Dec 04 10:31:20 compute-0 sudo[221595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:20 compute-0 python3.9[221598]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:20 compute-0 sudo[221595]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:20 compute-0 ceph-mon[75358]: pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:31:20 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f459fb76c30b12303870fa4bc77d8bbc44a51ad15d967f8eec075e964ba98dd2-merged.mount: Deactivated successfully.
Dec 04 10:31:20 compute-0 podman[221388]: 2025-12-04 10:31:20.555389106 +0000 UTC m=+0.882317142 container remove ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:31:20 compute-0 systemd[1]: libpod-conmon-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope: Deactivated successfully.
Dec 04 10:31:20 compute-0 podman[221654]: 2025-12-04 10:31:20.745917752 +0000 UTC m=+0.050780837 container create c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:31:20 compute-0 systemd[1]: Started libpod-conmon-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope.
Dec 04 10:31:20 compute-0 podman[221654]: 2025-12-04 10:31:20.7252898 +0000 UTC m=+0.030152895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:20 compute-0 podman[221654]: 2025-12-04 10:31:20.900196396 +0000 UTC m=+0.205059491 container init c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:31:20 compute-0 podman[221654]: 2025-12-04 10:31:20.910943158 +0000 UTC m=+0.215806233 container start c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:31:20 compute-0 podman[221654]: 2025-12-04 10:31:20.915918089 +0000 UTC m=+0.220781194 container attach c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:31:20 compute-0 sudo[221778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flddvjpiaykjtymbjvqihgegwwuzbbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844280.6566741-355-159887109773865/AnsiballZ_file.py'
Dec 04 10:31:20 compute-0 sudo[221778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:21 compute-0 python3.9[221780]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:21 compute-0 sudo[221778]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:21 compute-0 thirsty_lederberg[221723]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:31:21 compute-0 thirsty_lederberg[221723]: --> All data devices are unavailable
Dec 04 10:31:21 compute-0 systemd[1]: libpod-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope: Deactivated successfully.
Dec 04 10:31:21 compute-0 podman[221654]: 2025-12-04 10:31:21.405255417 +0000 UTC m=+0.710118522 container died c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f-merged.mount: Deactivated successfully.
Dec 04 10:31:21 compute-0 ceph-mon[75358]: pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:21 compute-0 podman[221654]: 2025-12-04 10:31:21.569188706 +0000 UTC m=+0.874051781 container remove c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:31:21 compute-0 systemd[1]: libpod-conmon-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope: Deactivated successfully.
Dec 04 10:31:21 compute-0 sudo[221959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-durizellshtipgnjbdbejbiaquynfpvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844281.3074768-363-222639590663249/AnsiballZ_stat.py'
Dec 04 10:31:21 compute-0 sudo[221299]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:21 compute-0 sudo[221959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:21 compute-0 sudo[221962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:31:21 compute-0 sudo[221962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:21 compute-0 sudo[221962]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:21 compute-0 sudo[221987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:31:21 compute-0 sudo[221987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:21 compute-0 python3.9[221961]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:21 compute-0 sudo[221959]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:22 compute-0 sudo[222113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveyaapcwfrorqmbwylmtkcfixrbglwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844281.3074768-363-222639590663249/AnsiballZ_file.py'
Dec 04 10:31:22 compute-0 sudo[222113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.004660703 +0000 UTC m=+0.024991379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.14750858 +0000 UTC m=+0.167839276 container create a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:31:22 compute-0 python3.9[222115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:22 compute-0 systemd[1]: Started libpod-conmon-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope.
Dec 04 10:31:22 compute-0 sudo[222113]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:22 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.330451162 +0000 UTC m=+0.350781828 container init a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.33980365 +0000 UTC m=+0.360134316 container start a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.344446502 +0000 UTC m=+0.364777168 container attach a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:31:22 compute-0 loving_hellman[222118]: 167 167
Dec 04 10:31:22 compute-0 systemd[1]: libpod-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope: Deactivated successfully.
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.345449507 +0000 UTC m=+0.365780173 container died a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-704a924a311c7ef4cdf52b4584ecdb1c2acbfbc91161118ce05740ec510ce183-merged.mount: Deactivated successfully.
Dec 04 10:31:22 compute-0 podman[222049]: 2025-12-04 10:31:22.458706222 +0000 UTC m=+0.479036888 container remove a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:31:22 compute-0 systemd[1]: libpod-conmon-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope: Deactivated successfully.
Dec 04 10:31:22 compute-0 podman[222227]: 2025-12-04 10:31:22.671892931 +0000 UTC m=+0.103407948 container create a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:31:22 compute-0 podman[222227]: 2025-12-04 10:31:22.591191377 +0000 UTC m=+0.022706424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:22 compute-0 systemd[1]: Started libpod-conmon-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope.
Dec 04 10:31:22 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:22 compute-0 sudo[222310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzbuquluuoxzovfgweupwqewszdsqjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844282.4551342-375-241304423602240/AnsiballZ_stat.py'
Dec 04 10:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:22 compute-0 sudo[222310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:22 compute-0 podman[222227]: 2025-12-04 10:31:22.754960642 +0000 UTC m=+0.186475689 container init a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:31:22 compute-0 podman[222227]: 2025-12-04 10:31:22.763170342 +0000 UTC m=+0.194685369 container start a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:31:22 compute-0 podman[222227]: 2025-12-04 10:31:22.802835857 +0000 UTC m=+0.234350884 container attach a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:31:22 compute-0 python3.9[222314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:22 compute-0 sudo[222310]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:23 compute-0 affectionate_nash[222309]: {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     "0": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "devices": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "/dev/loop3"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             ],
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_name": "ceph_lv0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_size": "21470642176",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "name": "ceph_lv0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "tags": {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.crush_device_class": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.encrypted": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_id": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.vdo": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.with_tpm": "0"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             },
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "vg_name": "ceph_vg0"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         }
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     ],
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     "1": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "devices": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "/dev/loop4"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             ],
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_name": "ceph_lv1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_size": "21470642176",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "name": "ceph_lv1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "tags": {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.crush_device_class": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.encrypted": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_id": "1",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.vdo": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.with_tpm": "0"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             },
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "vg_name": "ceph_vg1"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         }
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     ],
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     "2": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "devices": [
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "/dev/loop5"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             ],
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_name": "ceph_lv2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_size": "21470642176",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "name": "ceph_lv2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "tags": {
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.cluster_name": "ceph",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.crush_device_class": "",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.encrypted": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.objectstore": "bluestore",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osd_id": "2",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.vdo": "0",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:                 "ceph.with_tpm": "0"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             },
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "type": "block",
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:             "vg_name": "ceph_vg2"
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:         }
Dec 04 10:31:23 compute-0 affectionate_nash[222309]:     ]
Dec 04 10:31:23 compute-0 affectionate_nash[222309]: }
Dec 04 10:31:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:23 compute-0 systemd[1]: libpod-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope: Deactivated successfully.
Dec 04 10:31:23 compute-0 podman[222227]: 2025-12-04 10:31:23.087544096 +0000 UTC m=+0.519059113 container died a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c-merged.mount: Deactivated successfully.
Dec 04 10:31:23 compute-0 podman[222227]: 2025-12-04 10:31:23.132611242 +0000 UTC m=+0.564126269 container remove a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:31:23 compute-0 systemd[1]: libpod-conmon-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope: Deactivated successfully.
Dec 04 10:31:23 compute-0 sudo[221987]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:23 compute-0 sudo[222406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhuksowgofcluetkaezgnejuujzuljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844282.4551342-375-241304423602240/AnsiballZ_file.py'
Dec 04 10:31:23 compute-0 sudo[222406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:23 compute-0 sudo[222409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:31:23 compute-0 sudo[222409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:23 compute-0 sudo[222409]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:23 compute-0 sudo[222434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:31:23 compute-0 sudo[222434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:23 compute-0 python3.9[222408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:23 compute-0 sudo[222406]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.571354359 +0000 UTC m=+0.039107633 container create a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:31:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:23 compute-0 systemd[1]: Started libpod-conmon-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope.
Dec 04 10:31:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.649327967 +0000 UTC m=+0.117081241 container init a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.555091494 +0000 UTC m=+0.022844788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.656523302 +0000 UTC m=+0.124276576 container start a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:31:23 compute-0 clever_booth[222564]: 167 167
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.662060516 +0000 UTC m=+0.129813820 container attach a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:31:23 compute-0 systemd[1]: libpod-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope: Deactivated successfully.
Dec 04 10:31:23 compute-0 conmon[222564]: conmon a96cb0910ec6e0e03026 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope/container/memory.events
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.663538302 +0000 UTC m=+0.131291576 container died a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-44a14274776fb988382a37612240396233c4aafeac8fd13da355d2ac198a5104-merged.mount: Deactivated successfully.
Dec 04 10:31:23 compute-0 podman[222513]: 2025-12-04 10:31:23.73413496 +0000 UTC m=+0.201888244 container remove a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:31:23 compute-0 systemd[1]: libpod-conmon-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope: Deactivated successfully.
Dec 04 10:31:23 compute-0 sudo[222655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlgglaquyvrgomtchognmasbbthqiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844283.5276616-387-274784736383907/AnsiballZ_systemd.py'
Dec 04 10:31:23 compute-0 sudo[222655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:23 compute-0 podman[222663]: 2025-12-04 10:31:23.897197538 +0000 UTC m=+0.046983084 container create c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:31:23 compute-0 systemd[1]: Started libpod-conmon-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope.
Dec 04 10:31:23 compute-0 podman[222663]: 2025-12-04 10:31:23.87840214 +0000 UTC m=+0.028187696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:31:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:23 compute-0 podman[222663]: 2025-12-04 10:31:23.990699904 +0000 UTC m=+0.140485460 container init c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:31:23 compute-0 podman[222663]: 2025-12-04 10:31:23.99628356 +0000 UTC m=+0.146069106 container start c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:31:23 compute-0 podman[222663]: 2025-12-04 10:31:23.999825986 +0000 UTC m=+0.149611532 container attach c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:31:24 compute-0 python3.9[222657]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:31:24 compute-0 systemd[1]: Reloading.
Dec 04 10:31:24 compute-0 ceph-mon[75358]: pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:24 compute-0 systemd-rc-local-generator[222712]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:24 compute-0 systemd-sysv-generator[222715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:24 compute-0 sudo[222655]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:24 compute-0 lvm[222843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:31:24 compute-0 lvm[222844]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:31:24 compute-0 lvm[222844]: VG ceph_vg1 finished
Dec 04 10:31:24 compute-0 lvm[222843]: VG ceph_vg0 finished
Dec 04 10:31:24 compute-0 lvm[222855]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:31:24 compute-0 lvm[222855]: VG ceph_vg2 finished
Dec 04 10:31:24 compute-0 dazzling_khorana[222680]: {}
Dec 04 10:31:24 compute-0 podman[222663]: 2025-12-04 10:31:24.816610862 +0000 UTC m=+0.966396428 container died c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:31:24 compute-0 systemd[1]: libpod-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Deactivated successfully.
Dec 04 10:31:24 compute-0 systemd[1]: libpod-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Consumed 1.307s CPU time.
Dec 04 10:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef-merged.mount: Deactivated successfully.
Dec 04 10:31:24 compute-0 podman[222663]: 2025-12-04 10:31:24.86788717 +0000 UTC m=+1.017672716 container remove c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:31:24 compute-0 systemd[1]: libpod-conmon-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Deactivated successfully.
Dec 04 10:31:24 compute-0 sudo[222962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsiouljpjjpxurfezolsrvrpavuqkday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844284.643641-395-13366957107474/AnsiballZ_stat.py'
Dec 04 10:31:24 compute-0 sudo[222434]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:24 compute-0 sudo[222962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:31:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:31:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:24 compute-0 sudo[222965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:31:24 compute-0 sudo[222965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:31:24 compute-0 sudo[222965]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:25 compute-0 python3.9[222964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:25 compute-0 sudo[222962]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:25 compute-0 sudo[223078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhrxoxlpajkfptmawrexieclhkovzafn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844284.643641-395-13366957107474/AnsiballZ_file.py'
Dec 04 10:31:25 compute-0 sudo[223078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:25 compute-0 podman[223039]: 2025-12-04 10:31:25.413822175 +0000 UTC m=+0.091327673 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec 04 10:31:25 compute-0 python3.9[223086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:25 compute-0 sudo[223078]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:31:25 compute-0 ceph-mon[75358]: pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:26 compute-0 sudo[223243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksoxitqcpvzljwytncfpqwolccxqovxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844285.7783754-407-154376030433915/AnsiballZ_stat.py'
Dec 04 10:31:26 compute-0 sudo[223243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:26 compute-0 python3.9[223245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:26 compute-0 sudo[223243]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:26 compute-0 sudo[223335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakohxdkferdxlpjxzjxcbsfascrhqkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844285.7783754-407-154376030433915/AnsiballZ_file.py'
Dec 04 10:31:26 compute-0 sudo[223335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:26 compute-0 podman[223295]: 2025-12-04 10:31:26.499905565 +0000 UTC m=+0.049742542 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 04 10:31:26 compute-0 python3.9[223343]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:26 compute-0 sudo[223335]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:31:26
Dec 04 10:31:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:31:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:31:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', 'vms']
Dec 04 10:31:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:27 compute-0 sudo[223493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgbyqxvzceqffrrrtamvfodctnhmcaqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844286.8460617-419-224188489995411/AnsiballZ_systemd.py'
Dec 04 10:31:27 compute-0 sudo[223493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:27 compute-0 python3.9[223495]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:31:27 compute-0 systemd[1]: Reloading.
Dec 04 10:31:27 compute-0 systemd-rc-local-generator[223524]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:27 compute-0 systemd-sysv-generator[223528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:27 compute-0 systemd[1]: Starting Create netns directory...
Dec 04 10:31:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 04 10:31:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 04 10:31:27 compute-0 systemd[1]: Finished Create netns directory.
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:27 compute-0 sudo[223493]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:31:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:31:28 compute-0 ceph-mon[75358]: pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:28 compute-0 sudo[223686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzpybcowxsivjfxzmtkbgmrxypujenkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844288.236245-429-131041917631503/AnsiballZ_file.py'
Dec 04 10:31:28 compute-0 sudo[223686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:28 compute-0 python3.9[223688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:28 compute-0 sudo[223686]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:29 compute-0 sudo[223838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvvleztbjveljpjersqbxgzvfwsgrnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844289.018317-437-14735958788773/AnsiballZ_stat.py'
Dec 04 10:31:29 compute-0 sudo[223838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:29 compute-0 python3.9[223840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:29 compute-0 sudo[223838]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:29 compute-0 sudo[223961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgbygdlnhetspsietmwsmwpckszwvdmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844289.018317-437-14735958788773/AnsiballZ_copy.py'
Dec 04 10:31:29 compute-0 sudo[223961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:30 compute-0 python3.9[223963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844289.018317-437-14735958788773/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:30 compute-0 sudo[223961]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:30 compute-0 ceph-mon[75358]: pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:30 compute-0 sudo[224113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qttkfkbsvdvcbvigbsdafpdfpnpouawj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844290.3412879-454-180580509847286/AnsiballZ_file.py'
Dec 04 10:31:30 compute-0 sudo[224113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:30 compute-0 python3.9[224115]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:31:30 compute-0 sudo[224113]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:31 compute-0 sudo[224265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txedfhbdqtswiecatzxfuongnnlsvwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844290.9734116-462-208817445594135/AnsiballZ_stat.py'
Dec 04 10:31:31 compute-0 sudo[224265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:31 compute-0 python3.9[224267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:31 compute-0 sudo[224265]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:31 compute-0 sudo[224388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwyxyyxjwdpdixzwvtonrtmczpgtple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844290.9734116-462-208817445594135/AnsiballZ_copy.py'
Dec 04 10:31:31 compute-0 sudo[224388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:32 compute-0 python3.9[224390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844290.9734116-462-208817445594135/.source.json _original_basename=.g71v3mer follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:32 compute-0 sudo[224388]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:32 compute-0 ceph-mon[75358]: pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:32 compute-0 sudo[224540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwudmbhtbbodcktnvmzrkvskmjvegpnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844292.284562-477-81738463855413/AnsiballZ_file.py'
Dec 04 10:31:32 compute-0 sudo[224540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:32 compute-0 python3.9[224542]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:32 compute-0 sudo[224540]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:33 compute-0 sudo[224692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxptyrzmcootifwnapuyzgtoklocvyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844292.9712946-485-170214177609594/AnsiballZ_stat.py'
Dec 04 10:31:33 compute-0 sudo[224692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:33 compute-0 sudo[224692]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:33 compute-0 sudo[224815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhoddshxfalowgqzvqclahpexydrxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844292.9712946-485-170214177609594/AnsiballZ_copy.py'
Dec 04 10:31:33 compute-0 sudo[224815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:33 compute-0 sudo[224815]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:34 compute-0 ceph-mon[75358]: pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:34 compute-0 sudo[224967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnxzyrpfoddhlqebtxucbqjchifjnbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844294.2337394-502-138131600471048/AnsiballZ_container_config_data.py'
Dec 04 10:31:34 compute-0 sudo[224967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:34 compute-0 python3.9[224969]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 04 10:31:34 compute-0 sudo[224967]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:35 compute-0 sudo[225119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asjuqnhiwisoechyqfzcyvzpirklnvgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844295.0624242-511-180025979310169/AnsiballZ_container_config_hash.py'
Dec 04 10:31:35 compute-0 sudo[225119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:35 compute-0 python3.9[225121]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 04 10:31:35 compute-0 sudo[225119]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:36 compute-0 ceph-mon[75358]: pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:36 compute-0 sudo[225271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbboqzfjhbfnqgkfpcckvucefakerrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844295.9427283-520-205274272632788/AnsiballZ_podman_container_info.py'
Dec 04 10:31:36 compute-0 sudo[225271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:36 compute-0 python3.9[225273]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 04 10:31:36 compute-0 sudo[225271]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:31:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:31:38 compute-0 sudo[225448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxvnydwmtxcxgcmmfrsplzrkeeuwjxj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764844297.5807133-533-59727917018554/AnsiballZ_edpm_container_manage.py'
Dec 04 10:31:38 compute-0 sudo[225448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:38 compute-0 ceph-mon[75358]: pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:38 compute-0 python3[225450]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 04 10:31:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:39 compute-0 podman[225463]: 2025-12-04 10:31:39.514373472 +0000 UTC m=+1.104218003 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 04 10:31:39 compute-0 podman[225520]: 2025-12-04 10:31:39.677948112 +0000 UTC m=+0.054413785 container create fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 04 10:31:39 compute-0 podman[225520]: 2025-12-04 10:31:39.649416617 +0000 UTC m=+0.025882340 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 04 10:31:39 compute-0 python3[225450]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 04 10:31:39 compute-0 sudo[225448]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:40 compute-0 ceph-mon[75358]: pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:40 compute-0 sudo[225708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhasjshyzxsvfpwmjnkhdjqwqkbgobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844299.9729114-541-92292975816852/AnsiballZ_stat.py'
Dec 04 10:31:40 compute-0 sudo[225708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:40 compute-0 python3.9[225710]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:40 compute-0 sudo[225708]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:41 compute-0 sudo[225862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrelxupckmcvxlleyzxtiaigcqafosyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844300.7620108-550-162799515830996/AnsiballZ_file.py'
Dec 04 10:31:41 compute-0 sudo[225862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:41 compute-0 python3.9[225864]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:41 compute-0 sudo[225862]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:41 compute-0 sudo[225938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcuskkpcqxsliqblhottznrqlpxuvysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844300.7620108-550-162799515830996/AnsiballZ_stat.py'
Dec 04 10:31:41 compute-0 sudo[225938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:41 compute-0 python3.9[225940]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:41 compute-0 sudo[225938]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:42 compute-0 sudo[226089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qazwigbflicyhlabujghgncejrxrhzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844301.651559-550-118408734241082/AnsiballZ_copy.py'
Dec 04 10:31:42 compute-0 sudo[226089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:42 compute-0 ceph-mon[75358]: pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:42 compute-0 python3.9[226091]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764844301.651559-550-118408734241082/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:42 compute-0 sudo[226089]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:42 compute-0 sudo[226165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woailshmmzgqsbttkntrnqkowxgfbqfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844301.651559-550-118408734241082/AnsiballZ_systemd.py'
Dec 04 10:31:42 compute-0 sudo[226165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:42 compute-0 python3.9[226167]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:31:42 compute-0 systemd[1]: Reloading.
Dec 04 10:31:43 compute-0 systemd-rc-local-generator[226190]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:43 compute-0 systemd-sysv-generator[226195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:43 compute-0 sudo[226165]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:43 compute-0 sudo[226276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukpscutupikdhfnimojzmusztpidvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844301.651559-550-118408734241082/AnsiballZ_systemd.py'
Dec 04 10:31:43 compute-0 sudo[226276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:43 compute-0 python3.9[226278]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:31:43 compute-0 systemd[1]: Reloading.
Dec 04 10:31:43 compute-0 systemd-sysv-generator[226312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:43 compute-0 systemd-rc-local-generator[226308]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:44 compute-0 systemd[1]: Starting multipathd container...
Dec 04 10:31:44 compute-0 ceph-mon[75358]: pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec 04 10:31:44 compute-0 podman[226318]: 2025-12-04 10:31:44.324224649 +0000 UTC m=+0.119327344 container init fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 04 10:31:44 compute-0 multipathd[226333]: + sudo -E kolla_set_configs
Dec 04 10:31:44 compute-0 podman[226318]: 2025-12-04 10:31:44.350945369 +0000 UTC m=+0.146048054 container start fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:31:44 compute-0 podman[226318]: multipathd
Dec 04 10:31:44 compute-0 sudo[226339]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 04 10:31:44 compute-0 sudo[226339]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 04 10:31:44 compute-0 sudo[226339]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 04 10:31:44 compute-0 systemd[1]: Started multipathd container.
Dec 04 10:31:44 compute-0 sudo[226276]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:44 compute-0 multipathd[226333]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:31:44 compute-0 multipathd[226333]: INFO:__main__:Validating config file
Dec 04 10:31:44 compute-0 multipathd[226333]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:31:44 compute-0 multipathd[226333]: INFO:__main__:Writing out command to execute
Dec 04 10:31:44 compute-0 sudo[226339]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:44 compute-0 multipathd[226333]: ++ cat /run_command
Dec 04 10:31:44 compute-0 multipathd[226333]: + CMD='/usr/sbin/multipathd -d'
Dec 04 10:31:44 compute-0 multipathd[226333]: + ARGS=
Dec 04 10:31:44 compute-0 multipathd[226333]: + sudo kolla_copy_cacerts
Dec 04 10:31:44 compute-0 sudo[226361]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 04 10:31:44 compute-0 sudo[226361]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 04 10:31:44 compute-0 sudo[226361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 04 10:31:44 compute-0 sudo[226361]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:44 compute-0 multipathd[226333]: + [[ ! -n '' ]]
Dec 04 10:31:44 compute-0 multipathd[226333]: + . kolla_extend_start
Dec 04 10:31:44 compute-0 multipathd[226333]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 04 10:31:44 compute-0 multipathd[226333]: Running command: '/usr/sbin/multipathd -d'
Dec 04 10:31:44 compute-0 multipathd[226333]: + umask 0022
Dec 04 10:31:44 compute-0 multipathd[226333]: + exec /usr/sbin/multipathd -d
Dec 04 10:31:44 compute-0 podman[226340]: 2025-12-04 10:31:44.451122038 +0000 UTC m=+0.087768858 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:31:44 compute-0 multipathd[226333]: 3414.098598 | --------start up--------
Dec 04 10:31:44 compute-0 multipathd[226333]: 3414.098621 | read /etc/multipath.conf
Dec 04 10:31:44 compute-0 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 10:31:44 compute-0 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.service: Failed with result 'exit-code'.
Dec 04 10:31:44 compute-0 multipathd[226333]: 3414.105702 | path checkers start up
Dec 04 10:31:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:31:45 compute-0 python3.9[226522]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:31:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:45 compute-0 sudo[226674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxnvbpkukpkqraozihhtzjcrqkztdast ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844305.256076-586-256119250639704/AnsiballZ_command.py'
Dec 04 10:31:45 compute-0 sudo[226674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:45 compute-0 python3.9[226676]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:31:45 compute-0 sudo[226674]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:46 compute-0 ceph-mon[75358]: pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:46 compute-0 sudo[226839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqnmpriukcukupdgcflgaimwmkdsabu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844306.0404494-594-166579100232866/AnsiballZ_systemd.py'
Dec 04 10:31:46 compute-0 sudo[226839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:46 compute-0 python3.9[226841]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:31:46 compute-0 systemd[1]: Stopping multipathd container...
Dec 04 10:31:46 compute-0 multipathd[226333]: 3416.375332 | exit (signal)
Dec 04 10:31:46 compute-0 multipathd[226333]: 3416.375444 | --------shut down-------
Dec 04 10:31:46 compute-0 systemd[1]: libpod-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.scope: Deactivated successfully.
Dec 04 10:31:46 compute-0 conmon[226333]: conmon fe10987cdf96bb2ef3a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.scope/container/memory.events
Dec 04 10:31:46 compute-0 podman[226845]: 2025-12-04 10:31:46.762521826 +0000 UTC m=+0.080664635 container stop fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:31:46 compute-0 podman[226845]: 2025-12-04 10:31:46.793294245 +0000 UTC m=+0.111437074 container died fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 04 10:31:46 compute-0 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.timer: Deactivated successfully.
Dec 04 10:31:46 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec 04 10:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-userdata-shm.mount: Deactivated successfully.
Dec 04 10:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670-merged.mount: Deactivated successfully.
Dec 04 10:31:47 compute-0 podman[226845]: 2025-12-04 10:31:47.036441991 +0000 UTC m=+0.354584810 container cleanup fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 04 10:31:47 compute-0 podman[226845]: multipathd
Dec 04 10:31:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:47 compute-0 podman[226874]: multipathd
Dec 04 10:31:47 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 04 10:31:47 compute-0 systemd[1]: Stopped multipathd container.
Dec 04 10:31:47 compute-0 systemd[1]: Starting multipathd container...
Dec 04 10:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:31:47 compute-0 ceph-mon[75358]: pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 04 10:31:47 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec 04 10:31:47 compute-0 podman[226887]: 2025-12-04 10:31:47.268331534 +0000 UTC m=+0.119512410 container init fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 04 10:31:47 compute-0 multipathd[226903]: + sudo -E kolla_set_configs
Dec 04 10:31:47 compute-0 podman[226887]: 2025-12-04 10:31:47.293728902 +0000 UTC m=+0.144909728 container start fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 04 10:31:47 compute-0 sudo[226909]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 04 10:31:47 compute-0 sudo[226909]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 04 10:31:47 compute-0 sudo[226909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 04 10:31:47 compute-0 podman[226887]: multipathd
Dec 04 10:31:47 compute-0 systemd[1]: Started multipathd container.
Dec 04 10:31:47 compute-0 sudo[226839]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:47 compute-0 multipathd[226903]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:31:47 compute-0 multipathd[226903]: INFO:__main__:Validating config file
Dec 04 10:31:47 compute-0 multipathd[226903]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:31:47 compute-0 multipathd[226903]: INFO:__main__:Writing out command to execute
Dec 04 10:31:47 compute-0 sudo[226909]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:47 compute-0 multipathd[226903]: ++ cat /run_command
Dec 04 10:31:47 compute-0 podman[226910]: 2025-12-04 10:31:47.368886841 +0000 UTC m=+0.065882524 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec 04 10:31:47 compute-0 multipathd[226903]: + CMD='/usr/sbin/multipathd -d'
Dec 04 10:31:47 compute-0 multipathd[226903]: + ARGS=
Dec 04 10:31:47 compute-0 multipathd[226903]: + sudo kolla_copy_cacerts
Dec 04 10:31:47 compute-0 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-2e948bfe886be7a5.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 10:31:47 compute-0 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-2e948bfe886be7a5.service: Failed with result 'exit-code'.
Dec 04 10:31:47 compute-0 sudo[226936]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 04 10:31:47 compute-0 sudo[226936]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 04 10:31:47 compute-0 sudo[226936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 04 10:31:47 compute-0 sudo[226936]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:47 compute-0 multipathd[226903]: + [[ ! -n '' ]]
Dec 04 10:31:47 compute-0 multipathd[226903]: + . kolla_extend_start
Dec 04 10:31:47 compute-0 multipathd[226903]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 04 10:31:47 compute-0 multipathd[226903]: Running command: '/usr/sbin/multipathd -d'
Dec 04 10:31:47 compute-0 multipathd[226903]: + umask 0022
Dec 04 10:31:47 compute-0 multipathd[226903]: + exec /usr/sbin/multipathd -d
Dec 04 10:31:47 compute-0 multipathd[226903]: 3417.046486 | --------start up--------
Dec 04 10:31:47 compute-0 multipathd[226903]: 3417.046509 | read /etc/multipath.conf
Dec 04 10:31:47 compute-0 multipathd[226903]: 3417.052272 | path checkers start up
Dec 04 10:31:47 compute-0 sudo[227092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhcqefvtiolbgfbmkxnzpyvbdevckpql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844307.4968834-602-79241914426726/AnsiballZ_file.py'
Dec 04 10:31:47 compute-0 sudo[227092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:47 compute-0 python3.9[227094]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:47 compute-0 sudo[227092]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:48 compute-0 sudo[227244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iragfzlxnvpiiklxemleuwaflvjkejqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844308.2704508-614-83111072543953/AnsiballZ_file.py'
Dec 04 10:31:48 compute-0 sudo[227244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.599905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308600121, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1744, "num_deletes": 250, "total_data_size": 2956038, "memory_usage": 2991552, "flush_reason": "Manual Compaction"}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308612843, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1670440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11794, "largest_seqno": 13537, "table_properties": {"data_size": 1664710, "index_size": 2869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14340, "raw_average_key_size": 20, "raw_value_size": 1652065, "raw_average_value_size": 2317, "num_data_blocks": 132, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844111, "oldest_key_time": 1764844111, "file_creation_time": 1764844308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12979 microseconds, and 4850 cpu microseconds.
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.612899) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1670440 bytes OK
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.612928) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.614986) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615001) EVENT_LOG_v1 {"time_micros": 1764844308614997, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2948620, prev total WAL file size 2948620, number of live WAL files 2.
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1631KB)], [29(7867KB)]
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308615887, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9726619, "oldest_snapshot_seqno": -1}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4041 keys, 7623222 bytes, temperature: kUnknown
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308693462, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7623222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7594722, "index_size": 17318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96276, "raw_average_key_size": 23, "raw_value_size": 7520371, "raw_average_value_size": 1861, "num_data_blocks": 754, "num_entries": 4041, "num_filter_entries": 4041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.693786) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7623222 bytes
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.695330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.2 rd, 98.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4460, records dropped: 419 output_compression: NoCompression
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.695347) EVENT_LOG_v1 {"time_micros": 1764844308695339, "job": 12, "event": "compaction_finished", "compaction_time_micros": 77711, "compaction_time_cpu_micros": 18094, "output_level": 6, "num_output_files": 1, "total_output_size": 7623222, "num_input_records": 4460, "num_output_records": 4041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308695686, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308696731, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:31:48 compute-0 python3.9[227246]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 04 10:31:48 compute-0 sudo[227244]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:49 compute-0 sudo[227396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdmqtktpydsigoeirazrkabzdmcfwqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844308.9837613-622-94178092668165/AnsiballZ_modprobe.py'
Dec 04 10:31:49 compute-0 sudo[227396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:49 compute-0 python3.9[227398]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 04 10:31:49 compute-0 kernel: Key type psk registered
Dec 04 10:31:49 compute-0 sudo[227396]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:49 compute-0 ceph-mon[75358]: pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:49 compute-0 sudo[227560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntopzfzsdurunzsnumsqpnylrrjfwyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844309.708103-630-158719632535457/AnsiballZ_stat.py'
Dec 04 10:31:49 compute-0 sudo[227560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:50 compute-0 python3.9[227562]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:31:50 compute-0 sudo[227560]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:50 compute-0 sudo[227683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loosussjglfypeiznhvizxsgdmcotsgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844309.708103-630-158719632535457/AnsiballZ_copy.py'
Dec 04 10:31:50 compute-0 sudo[227683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:50 compute-0 python3.9[227685]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844309.708103-630-158719632535457/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:50 compute-0 sudo[227683]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:51 compute-0 sudo[227835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvdfshwzbdnuzrdtcibwkjvyixbuluvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844310.9447584-646-72286650540932/AnsiballZ_lineinfile.py'
Dec 04 10:31:51 compute-0 sudo[227835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:51 compute-0 python3.9[227837]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:31:51 compute-0 sudo[227835]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:52 compute-0 sudo[227987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmzfghgxexbcrhkddothcqlmwrmyreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844311.7227871-654-210476623659837/AnsiballZ_systemd.py'
Dec 04 10:31:52 compute-0 sudo[227987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:52 compute-0 ceph-mon[75358]: pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:52 compute-0 python3.9[227989]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:31:52 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 04 10:31:52 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 04 10:31:52 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 04 10:31:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 04 10:31:52 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 04 10:31:52 compute-0 sudo[227987]: pam_unix(sudo:session): session closed for user root
Dec 04 10:31:52 compute-0 sudo[228143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhnvzjaetmtlihhwecwobrkolrhzzxdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844312.639784-662-151137016551518/AnsiballZ_dnf.py'
Dec 04 10:31:52 compute-0 sudo[228143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:31:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:53 compute-0 python3.9[228145]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 04 10:31:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:54 compute-0 ceph-mon[75358]: pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:54 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 04 10:31:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:31:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:31:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:31:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 04 10:31:55 compute-0 podman[228151]: 2025-12-04 10:31:55.756618336 +0000 UTC m=+0.129403800 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 04 10:31:56 compute-0 ceph-mon[75358]: pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:31:56 compute-0 podman[228179]: 2025-12-04 10:31:56.939170323 +0000 UTC m=+0.052666973 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:31:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:31:58 compute-0 ceph-mon[75358]: pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 04 10:31:58 compute-0 systemd[1]: Reloading.
Dec 04 10:31:58 compute-0 systemd-rc-local-generator[228222]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:58 compute-0 systemd-sysv-generator[228225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:31:58 compute-0 systemd[1]: Reloading.
Dec 04 10:31:58 compute-0 systemd-sysv-generator[228263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:58 compute-0 systemd-rc-local-generator[228260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:31:59 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 04 10:31:59 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 04 10:31:59 compute-0 lvm[228310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:31:59 compute-0 lvm[228310]: VG ceph_vg0 finished
Dec 04 10:31:59 compute-0 lvm[228311]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:31:59 compute-0 lvm[228311]: VG ceph_vg1 finished
Dec 04 10:31:59 compute-0 lvm[228313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:31:59 compute-0 lvm[228313]: VG ceph_vg2 finished
Dec 04 10:31:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 04 10:31:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 04 10:31:59 compute-0 systemd[1]: Reloading.
Dec 04 10:31:59 compute-0 systemd-rc-local-generator[228363]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:31:59 compute-0 systemd-sysv-generator[228366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:31:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 04 10:32:00 compute-0 ceph-mon[75358]: pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:00 compute-0 sshd-session[228571]: Invalid user admin123 from 74.249.218.27 port 41428
Dec 04 10:32:00 compute-0 sshd-session[228571]: Received disconnect from 74.249.218.27 port 41428:11: Bye Bye [preauth]
Dec 04 10:32:00 compute-0 sshd-session[228571]: Disconnected from invalid user admin123 74.249.218.27 port 41428 [preauth]
Dec 04 10:32:00 compute-0 sshd-session[228270]: Invalid user superadmin from 103.149.86.230 port 49446
Dec 04 10:32:00 compute-0 sshd-session[228270]: Received disconnect from 103.149.86.230 port 49446:11: Bye Bye [preauth]
Dec 04 10:32:00 compute-0 sshd-session[228270]: Disconnected from invalid user superadmin 103.149.86.230 port 49446 [preauth]
Dec 04 10:32:00 compute-0 sudo[228143]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 04 10:32:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 04 10:32:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.599s CPU time.
Dec 04 10:32:00 compute-0 systemd[1]: run-rb576fe760eba4316866b9652e39ad915.service: Deactivated successfully.
Dec 04 10:32:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:01 compute-0 sudo[229655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fktdmlzzdjkskhvoutldzfgsrnssrmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844320.8140917-670-105299070716287/AnsiballZ_systemd_service.py'
Dec 04 10:32:01 compute-0 sudo[229655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:01 compute-0 python3.9[229657]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:32:01 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 04 10:32:01 compute-0 iscsid[217362]: iscsid shutting down.
Dec 04 10:32:01 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 04 10:32:01 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 04 10:32:01 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 04 10:32:01 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 04 10:32:01 compute-0 systemd[1]: Started Open-iSCSI.
Dec 04 10:32:01 compute-0 sudo[229655]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:02 compute-0 ceph-mon[75358]: pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:02 compute-0 python3.9[229811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 04 10:32:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:03 compute-0 sudo[229965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgsilsyhhfsbzfrcfmwaufxdlaauihrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844322.814038-688-7466848502762/AnsiballZ_file.py'
Dec 04 10:32:03 compute-0 sudo[229965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:03 compute-0 python3.9[229967]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:03 compute-0 sudo[229965]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:04 compute-0 sudo[230117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixnoyggnjvlegventpuvfcwdrjfhqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844323.7360117-699-143661666849064/AnsiballZ_systemd_service.py'
Dec 04 10:32:04 compute-0 sudo[230117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:04 compute-0 ceph-mon[75358]: pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:04 compute-0 python3.9[230119]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:32:04 compute-0 systemd[1]: Reloading.
Dec 04 10:32:04 compute-0 systemd-rc-local-generator[230146]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:32:04 compute-0 systemd-sysv-generator[230150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:32:04 compute-0 sudo[230117]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 04 10:32:05 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 04 10:32:05 compute-0 ceph-mon[75358]: pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:05 compute-0 python3.9[230306]: ansible-ansible.builtin.service_facts Invoked
Dec 04 10:32:05 compute-0 network[230323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 04 10:32:05 compute-0 network[230324]: 'network-scripts' will be removed from distribution in near future.
Dec 04 10:32:05 compute-0 network[230325]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 04 10:32:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:08 compute-0 ceph-mon[75358]: pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 10:32:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 04 10:32:10 compute-0 ceph-mon[75358]: pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 04 10:32:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:11 compute-0 sudo[230600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmfwtjkmabdxmwbscblihkxigdqsduou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844331.3592658-718-75673043501705/AnsiballZ_systemd_service.py'
Dec 04 10:32:11 compute-0 sudo[230600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:11 compute-0 python3.9[230602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:12 compute-0 sudo[230600]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:12 compute-0 ceph-mon[75358]: pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:12 compute-0 sshd-session[230463]: Invalid user superadmin from 103.179.218.243 port 42330
Dec 04 10:32:12 compute-0 sudo[230753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtbyjngcyotuxapmyerqsjfvfehuqqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844332.1519322-718-57122853202348/AnsiballZ_systemd_service.py'
Dec 04 10:32:12 compute-0 sudo[230753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:12 compute-0 sshd-session[230463]: Received disconnect from 103.179.218.243 port 42330:11: Bye Bye [preauth]
Dec 04 10:32:12 compute-0 sshd-session[230463]: Disconnected from invalid user superadmin 103.179.218.243 port 42330 [preauth]
Dec 04 10:32:12 compute-0 python3.9[230755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:12 compute-0 sudo[230753]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:13 compute-0 sudo[230906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshpbqidixdhucdtdrqowbjlbncpmore ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844332.9115193-718-195240016798723/AnsiballZ_systemd_service.py'
Dec 04 10:32:13 compute-0 sudo[230906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:13 compute-0 python3.9[230908]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:13 compute-0 sudo[230906]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:14 compute-0 ceph-mon[75358]: pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:14 compute-0 sudo[231059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzbggmoldqtretrsrdfrbdnyqvsrknic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844333.9158056-718-240408949600825/AnsiballZ_systemd_service.py'
Dec 04 10:32:14 compute-0 sudo[231059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:14 compute-0 python3.9[231061]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:14 compute-0 sudo[231059]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:14 compute-0 sudo[231212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acaznvsbqndriqntinemomiqkzqeifbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844334.688979-718-20907567615362/AnsiballZ_systemd_service.py'
Dec 04 10:32:14 compute-0 sudo[231212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:15 compute-0 python3.9[231214]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:15 compute-0 sudo[231212]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:15 compute-0 sudo[231365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwkmjptnpdfjwlzfptrohfawdxqpnjfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844335.383067-718-15282067615604/AnsiballZ_systemd_service.py'
Dec 04 10:32:15 compute-0 sudo[231365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:15 compute-0 python3.9[231367]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:16 compute-0 sudo[231365]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:16 compute-0 ceph-mon[75358]: pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:16 compute-0 sudo[231518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzohlnlzepvbwdkhikcdhrfudegbncmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844336.1537027-718-99423815875182/AnsiballZ_systemd_service.py'
Dec 04 10:32:16 compute-0 sudo[231518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:16 compute-0 python3.9[231520]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:16 compute-0 sudo[231518]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:17 compute-0 sudo[231671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodwwxtysegyasoiqjnwxaxlaszsvfbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844336.9364986-718-161114635842198/AnsiballZ_systemd_service.py'
Dec 04 10:32:17 compute-0 sudo[231671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:17 compute-0 python3.9[231673]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:32:17 compute-0 sudo[231671]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:17 compute-0 podman[231675]: 2025-12-04 10:32:17.619543439 +0000 UTC m=+0.068830826 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:32:18 compute-0 ceph-mon[75358]: pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:18 compute-0 sudo[231844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcxvxrtwyheoinsnogzvvnbwwaungiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844337.901266-777-160155949694135/AnsiballZ_file.py'
Dec 04 10:32:18 compute-0 sudo[231844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:18 compute-0 python3.9[231846]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:18 compute-0 sudo[231844]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:18 compute-0 sudo[231996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbqhfwioufxuxlbfihbyvghyqbfjjata ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844338.5474856-777-48690293273484/AnsiballZ_file.py'
Dec 04 10:32:18 compute-0 sudo[231996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:19 compute-0 python3.9[231998]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:19 compute-0 sudo[231996]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:19 compute-0 sudo[232148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmhhpeegzyictxzmejnebaaevdooogep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844339.1532238-777-94258858097163/AnsiballZ_file.py'
Dec 04 10:32:19 compute-0 sudo[232148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:19 compute-0 python3.9[232150]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:19 compute-0 sudo[232148]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:20 compute-0 sudo[232300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvogylfrticavporldqfapxrtwkueyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844339.758824-777-97634065382841/AnsiballZ_file.py'
Dec 04 10:32:20 compute-0 sudo[232300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:20 compute-0 python3.9[232302]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:20 compute-0 sudo[232300]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:20 compute-0 ceph-mon[75358]: pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:20 compute-0 sudo[232452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdarsnigawgesziocyqmdegwbicjgfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844340.3566272-777-156430414112656/AnsiballZ_file.py'
Dec 04 10:32:20 compute-0 sudo[232452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:20 compute-0 python3.9[232454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:20 compute-0 sudo[232452]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:21 compute-0 sudo[232604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbopvsduujyaylbbrtmigtmikzifrly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844340.9783018-777-73145190937673/AnsiballZ_file.py'
Dec 04 10:32:21 compute-0 sudo[232604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:21 compute-0 ceph-mon[75358]: pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:21 compute-0 python3.9[232606]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:21 compute-0 sudo[232604]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:21 compute-0 sudo[232756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjetpiosbenfgwvhdqfcffdgkawylczw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844341.5591242-777-250526820347454/AnsiballZ_file.py'
Dec 04 10:32:21 compute-0 sudo[232756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:22 compute-0 python3.9[232758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:22 compute-0 sudo[232756]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:22 compute-0 sudo[232908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygznawyzkielklnxukbtsoxtmzuawtrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844342.1771412-777-209824361581452/AnsiballZ_file.py'
Dec 04 10:32:22 compute-0 sudo[232908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:22 compute-0 python3.9[232910]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:22 compute-0 sudo[232908]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:23 compute-0 sudo[233060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kluotltnoidznmfpkohotlxuwikfonhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844342.7847779-834-263185044246726/AnsiballZ_file.py'
Dec 04 10:32:23 compute-0 sudo[233060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:23 compute-0 python3.9[233062]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:23 compute-0 sudo[233060]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:23 compute-0 ceph-mon[75358]: pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:23 compute-0 sudo[233212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edanrkkumwtsvliopaleyvungkmbnrbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844343.387092-834-156553518432725/AnsiballZ_file.py'
Dec 04 10:32:23 compute-0 sudo[233212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:24 compute-0 python3.9[233214]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:24 compute-0 sudo[233212]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:24 compute-0 sudo[233364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsdhhanewkilwyvbfgamnqijsdmmnxgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844344.294888-834-52483825609069/AnsiballZ_file.py'
Dec 04 10:32:24 compute-0 sudo[233364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:24 compute-0 python3.9[233366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:24 compute-0 sudo[233364]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:25 compute-0 sudo[233466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:32:25 compute-0 sudo[233466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:25 compute-0 sudo[233466]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:25 compute-0 sudo[233491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:32:25 compute-0 sudo[233491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:25 compute-0 sudo[233566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgjwklxqfpmkvoucotwjruydwhgscngm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844344.8583508-834-199861804770145/AnsiballZ_file.py'
Dec 04 10:32:25 compute-0 sudo[233566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:25 compute-0 python3.9[233568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:25 compute-0 sudo[233566]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:25 compute-0 sudo[233491]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:32:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:32:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:32:25 compute-0 sudo[233724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:32:25 compute-0 sudo[233724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:25 compute-0 sudo[233724]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:25 compute-0 sudo[233793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivjtyicjpepisgtprtqwudbhrtbsxthq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844345.5799077-834-82930388362698/AnsiballZ_file.py'
Dec 04 10:32:25 compute-0 sudo[233793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:25 compute-0 sudo[233780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:32:25 compute-0 sudo[233780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:25 compute-0 podman[233741]: 2025-12-04 10:32:25.928160388 +0000 UTC m=+0.113247957 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:32:26 compute-0 python3.9[233817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:26 compute-0 sudo[233793]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:26 compute-0 ceph-mon[75358]: pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:32:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.17767733 +0000 UTC m=+0.052969009 container create a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:32:26 compute-0 systemd[1]: Started libpod-conmon-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope.
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.151910993 +0000 UTC m=+0.027202722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.272708563 +0000 UTC m=+0.148000242 container init a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.280682287 +0000 UTC m=+0.155973966 container start a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.284255594 +0000 UTC m=+0.159547273 container attach a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:32:26 compute-0 nice_aryabhata[233901]: 167 167
Dec 04 10:32:26 compute-0 systemd[1]: libpod-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope: Deactivated successfully.
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.287426461 +0000 UTC m=+0.162718140 container died a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-604d27f7f4f5d9c3c2f3a73506bec14d21c716c4ebb2f3ab723c22677ce31ce0-merged.mount: Deactivated successfully.
Dec 04 10:32:26 compute-0 podman[233851]: 2025-12-04 10:32:26.324163325 +0000 UTC m=+0.199455004 container remove a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:32:26 compute-0 systemd[1]: libpod-conmon-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope: Deactivated successfully.
Dec 04 10:32:26 compute-0 podman[233999]: 2025-12-04 10:32:26.505119768 +0000 UTC m=+0.048987823 container create 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:32:26 compute-0 sudo[234039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfiwgvpswutixovnrpemdqgemupyneb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844346.2090588-834-252091030331217/AnsiballZ_file.py'
Dec 04 10:32:26 compute-0 sudo[234039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:26 compute-0 systemd[1]: Started libpod-conmon-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope.
Dec 04 10:32:26 compute-0 podman[233999]: 2025-12-04 10:32:26.481160515 +0000 UTC m=+0.025028590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:26 compute-0 podman[233999]: 2025-12-04 10:32:26.598910921 +0000 UTC m=+0.142778996 container init 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:32:26 compute-0 podman[233999]: 2025-12-04 10:32:26.609886338 +0000 UTC m=+0.153754393 container start 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:26 compute-0 podman[233999]: 2025-12-04 10:32:26.616286834 +0000 UTC m=+0.160154889 container attach 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:32:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:32:26
Dec 04 10:32:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:32:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:32:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 04 10:32:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:32:26 compute-0 python3.9[234042]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:26 compute-0 sudo[234039]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:27 compute-0 intelligent_leakey[234045]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:32:27 compute-0 intelligent_leakey[234045]: --> All data devices are unavailable
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:27 compute-0 systemd[1]: libpod-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope: Deactivated successfully.
Dec 04 10:32:27 compute-0 podman[233999]: 2025-12-04 10:32:27.154922041 +0000 UTC m=+0.698790116 container died 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:27 compute-0 sudo[234227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbdatsyyoyeqmcufvocxzirdkmnuyfel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844346.8798614-834-47271535038112/AnsiballZ_file.py'
Dec 04 10:32:27 compute-0 sudo[234227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:27 compute-0 python3.9[234240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:27 compute-0 sudo[234227]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf-merged.mount: Deactivated successfully.
Dec 04 10:32:27 compute-0 ceph-mon[75358]: pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:27 compute-0 podman[233999]: 2025-12-04 10:32:27.539660924 +0000 UTC m=+1.083528979 container remove 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:27 compute-0 podman[234190]: 2025-12-04 10:32:27.543366935 +0000 UTC m=+0.406737860 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:32:27 compute-0 sudo[233780]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:27 compute-0 systemd[1]: libpod-conmon-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope: Deactivated successfully.
Dec 04 10:32:27 compute-0 sudo[234306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:32:27 compute-0 sudo[234306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:27 compute-0 sudo[234306]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:27 compute-0 sshd-session[234188]: Invalid user ionadmin from 217.154.62.22 port 40880
Dec 04 10:32:27 compute-0 sudo[234353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:32:27 compute-0 sudo[234353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:27 compute-0 sshd-session[234188]: Received disconnect from 217.154.62.22 port 40880:11: Bye Bye [preauth]
Dec 04 10:32:27 compute-0 sshd-session[234188]: Disconnected from invalid user ionadmin 217.154.62.22 port 40880 [preauth]
Dec 04 10:32:27 compute-0 sudo[234450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nttkbvhbbvcnukbsgrxzxjizmhfgohrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844347.5628436-834-235389973997298/AnsiballZ_file.py'
Dec 04 10:32:27 compute-0 sudo[234450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.022313642 +0000 UTC m=+0.047858519 container create 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:32:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:32:28 compute-0 systemd[1]: Started libpod-conmon-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope.
Dec 04 10:32:28 compute-0 python3.9[234452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:27.997074901 +0000 UTC m=+0.022619798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:28 compute-0 sudo[234450]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.116462242 +0000 UTC m=+0.142007229 container init 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.125591813 +0000 UTC m=+0.151136690 container start 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.129425695 +0000 UTC m=+0.154970692 container attach 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:32:28 compute-0 nifty_wright[234482]: 167 167
Dec 04 10:32:28 compute-0 systemd[1]: libpod-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope: Deactivated successfully.
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.133684298 +0000 UTC m=+0.159229165 container died 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-249efb296f630a14b4e97a0ce5f9451c5f77e1b5e0a174b1f619b77bcc847a48-merged.mount: Deactivated successfully.
Dec 04 10:32:28 compute-0 podman[234465]: 2025-12-04 10:32:28.183414723 +0000 UTC m=+0.208959620 container remove 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:32:28 compute-0 systemd[1]: libpod-conmon-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope: Deactivated successfully.
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.35797661 +0000 UTC m=+0.045321199 container create 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:32:28 compute-0 systemd[1]: Started libpod-conmon-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope.
Dec 04 10:32:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.338942498 +0000 UTC m=+0.026287117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.43770897 +0000 UTC m=+0.125053579 container init 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.446060092 +0000 UTC m=+0.133404681 container start 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.449947206 +0000 UTC m=+0.137291815 container attach 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:32:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:28 compute-0 sudo[234679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqznavzswafxydkkknrcyamvcuracgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844348.4021258-892-81414475328872/AnsiballZ_command.py'
Dec 04 10:32:28 compute-0 sudo[234679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:28 compute-0 happy_saha[234565]: {
Dec 04 10:32:28 compute-0 happy_saha[234565]:     "0": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:         {
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "devices": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "/dev/loop3"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             ],
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_name": "ceph_lv0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_size": "21470642176",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "name": "ceph_lv0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "tags": {
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_name": "ceph",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.crush_device_class": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.encrypted": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.objectstore": "bluestore",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_id": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.vdo": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.with_tpm": "0"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             },
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "vg_name": "ceph_vg0"
Dec 04 10:32:28 compute-0 happy_saha[234565]:         }
Dec 04 10:32:28 compute-0 happy_saha[234565]:     ],
Dec 04 10:32:28 compute-0 happy_saha[234565]:     "1": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:         {
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "devices": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "/dev/loop4"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             ],
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_name": "ceph_lv1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_size": "21470642176",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "name": "ceph_lv1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "tags": {
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_name": "ceph",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.crush_device_class": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.encrypted": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.objectstore": "bluestore",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_id": "1",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.vdo": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.with_tpm": "0"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             },
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "vg_name": "ceph_vg1"
Dec 04 10:32:28 compute-0 happy_saha[234565]:         }
Dec 04 10:32:28 compute-0 happy_saha[234565]:     ],
Dec 04 10:32:28 compute-0 happy_saha[234565]:     "2": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:         {
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "devices": [
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "/dev/loop5"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             ],
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_name": "ceph_lv2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_size": "21470642176",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "name": "ceph_lv2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "tags": {
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.cluster_name": "ceph",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.crush_device_class": "",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.encrypted": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.objectstore": "bluestore",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osd_id": "2",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.vdo": "0",
Dec 04 10:32:28 compute-0 happy_saha[234565]:                 "ceph.with_tpm": "0"
Dec 04 10:32:28 compute-0 happy_saha[234565]:             },
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "type": "block",
Dec 04 10:32:28 compute-0 happy_saha[234565]:             "vg_name": "ceph_vg2"
Dec 04 10:32:28 compute-0 happy_saha[234565]:         }
Dec 04 10:32:28 compute-0 happy_saha[234565]:     ]
Dec 04 10:32:28 compute-0 happy_saha[234565]: }
Dec 04 10:32:28 compute-0 systemd[1]: libpod-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope: Deactivated successfully.
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.808520617 +0000 UTC m=+0.495865216 container died 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43-merged.mount: Deactivated successfully.
Dec 04 10:32:28 compute-0 podman[234528]: 2025-12-04 10:32:28.872497687 +0000 UTC m=+0.559842276 container remove 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:32:28 compute-0 systemd[1]: libpod-conmon-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope: Deactivated successfully.
Dec 04 10:32:28 compute-0 sudo[234353]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:28 compute-0 python3.9[234681]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:28 compute-0 sudo[234679]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:29 compute-0 sudo[234695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:32:29 compute-0 sudo[234695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:29 compute-0 sudo[234695]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:29 compute-0 sudo[234726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:32:29 compute-0 sudo[234726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.345318635 +0000 UTC m=+0.026781659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.445047419 +0000 UTC m=+0.126510383 container create 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:32:29 compute-0 systemd[1]: Started libpod-conmon-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope.
Dec 04 10:32:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.554002677 +0000 UTC m=+0.235465641 container init 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.563415986 +0000 UTC m=+0.244878920 container start 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.56771741 +0000 UTC m=+0.249180374 container attach 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:32:29 compute-0 eager_gagarin[234873]: 167 167
Dec 04 10:32:29 compute-0 systemd[1]: libpod-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope: Deactivated successfully.
Dec 04 10:32:29 compute-0 podman[234834]: 2025-12-04 10:32:29.57106956 +0000 UTC m=+0.252532554 container died 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:32:29 compute-0 python3.9[234940]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 04 10:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9e78cef61cb8abdc5bbcbba6f90b3e9f16fc2adc3ad7a78d822a378ae7c759-merged.mount: Deactivated successfully.
Dec 04 10:32:30 compute-0 podman[234834]: 2025-12-04 10:32:30.034361557 +0000 UTC m=+0.715824521 container remove 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:32:30 compute-0 systemd[1]: libpod-conmon-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope: Deactivated successfully.
Dec 04 10:32:30 compute-0 ceph-mon[75358]: pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:30 compute-0 podman[235026]: 2025-12-04 10:32:30.201829283 +0000 UTC m=+0.043212778 container create d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:30 compute-0 systemd[1]: Started libpod-conmon-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope.
Dec 04 10:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:32:30 compute-0 podman[235026]: 2025-12-04 10:32:30.18317674 +0000 UTC m=+0.024560255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:32:30 compute-0 podman[235026]: 2025-12-04 10:32:30.285691832 +0000 UTC m=+0.127075337 container init d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:32:30 compute-0 podman[235026]: 2025-12-04 10:32:30.295176932 +0000 UTC m=+0.136560427 container start d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:32:30 compute-0 podman[235026]: 2025-12-04 10:32:30.29962678 +0000 UTC m=+0.141010305 container attach d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:32:30 compute-0 sudo[235119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwniirwsgvtpsvqssvrxbmamrwkybie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844350.0393229-910-129651670139020/AnsiballZ_systemd_service.py'
Dec 04 10:32:30 compute-0 sudo[235119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:30 compute-0 python3.9[235121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:32:30 compute-0 systemd[1]: Reloading.
Dec 04 10:32:30 compute-0 systemd-rc-local-generator[235189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:32:30 compute-0 systemd-sysv-generator[235196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:32:31 compute-0 sudo[235119]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:31 compute-0 wizardly_yalow[235064]: {}
Dec 04 10:32:31 compute-0 lvm[235231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:32:31 compute-0 lvm[235235]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:32:31 compute-0 lvm[235234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:32:31 compute-0 lvm[235231]: VG ceph_vg0 finished
Dec 04 10:32:31 compute-0 lvm[235234]: VG ceph_vg1 finished
Dec 04 10:32:31 compute-0 lvm[235235]: VG ceph_vg2 finished
Dec 04 10:32:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:31 compute-0 systemd[1]: libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Deactivated successfully.
Dec 04 10:32:31 compute-0 systemd[1]: libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Consumed 1.402s CPU time.
Dec 04 10:32:31 compute-0 conmon[235064]: conmon d23de91045872c81c2e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope/container/memory.events
Dec 04 10:32:31 compute-0 podman[235026]: 2025-12-04 10:32:31.123323833 +0000 UTC m=+0.964707348 container died d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500-merged.mount: Deactivated successfully.
Dec 04 10:32:31 compute-0 podman[235026]: 2025-12-04 10:32:31.166470907 +0000 UTC m=+1.007854392 container remove d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:32:31 compute-0 systemd[1]: libpod-conmon-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Deactivated successfully.
Dec 04 10:32:31 compute-0 sudo[234726]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:32:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:32:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:31 compute-0 sudo[235271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:32:31 compute-0 sudo[235271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:32:31 compute-0 sudo[235271]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:31 compute-0 sudo[235421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkihydxaypgwyakexokjyavklhkxyab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844351.2595532-918-36247482520755/AnsiballZ_command.py'
Dec 04 10:32:31 compute-0 sudo[235421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:31 compute-0 python3.9[235423]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:31 compute-0 sudo[235421]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:32 compute-0 ceph-mon[75358]: pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:32:32 compute-0 sudo[235574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnvmfuazprfuqnpekjegxecmaxhorcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844351.9588203-918-245156358108729/AnsiballZ_command.py'
Dec 04 10:32:32 compute-0 sudo[235574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:32 compute-0 python3.9[235576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:32 compute-0 sudo[235574]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:32 compute-0 sudo[235727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zopirxlafhwzmgcfvifymfornccyzwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844352.548578-918-44077358064369/AnsiballZ_command.py'
Dec 04 10:32:32 compute-0 sudo[235727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:33 compute-0 python3.9[235729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:33 compute-0 sudo[235727]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:33 compute-0 sudo[235880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqvthmisyoagqkuuaecayanlbdqkmve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844353.2347193-918-123936239453993/AnsiballZ_command.py'
Dec 04 10:32:33 compute-0 sudo[235880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:33 compute-0 python3.9[235882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:33 compute-0 sudo[235880]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:34 compute-0 sudo[236033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujelqupknigdpymngppamkvoaayhift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844353.885869-918-217272429170268/AnsiballZ_command.py'
Dec 04 10:32:34 compute-0 sudo[236033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:34 compute-0 ceph-mon[75358]: pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:34 compute-0 python3.9[236035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:34 compute-0 sudo[236033]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:34 compute-0 sudo[236186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnrzcmdkmlfulsaywmqackngndiremui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844354.5068016-918-250290145053464/AnsiballZ_command.py'
Dec 04 10:32:34 compute-0 sudo[236186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:34 compute-0 python3.9[236188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:35 compute-0 sudo[236186]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:35 compute-0 sudo[236339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbdrohofturdfgyzhdfbfumqwlwvlzmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844355.1635785-918-60308907619911/AnsiballZ_command.py'
Dec 04 10:32:35 compute-0 sudo[236339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:35 compute-0 python3.9[236341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:35 compute-0 ceph-mon[75358]: pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:35 compute-0 sudo[236339]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:36 compute-0 sudo[236492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqwxteckvhocazhripcffnjoyohlrayf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844355.799351-918-227851304377326/AnsiballZ_command.py'
Dec 04 10:32:36 compute-0 sudo[236492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:36 compute-0 python3.9[236494]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 04 10:32:36 compute-0 sudo[236492]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:32:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:37 compute-0 sudo[236645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfksjeaowfkmxuvatzivlyjefjayasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844357.1208344-997-212545041399729/AnsiballZ_file.py'
Dec 04 10:32:37 compute-0 sudo[236645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:37 compute-0 python3.9[236647]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:37 compute-0 sudo[236645]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:38 compute-0 sudo[236797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbfufyelbdmzshqwuqdwykuxnydwbqqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844357.7771935-997-148920714441360/AnsiballZ_file.py'
Dec 04 10:32:38 compute-0 sudo[236797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:38 compute-0 ceph-mon[75358]: pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:38 compute-0 python3.9[236799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:38 compute-0 sudo[236797]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:38 compute-0 sudo[236949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yteuookayszleacbaxskazzxcqusacpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844358.3573813-997-142200191180228/AnsiballZ_file.py'
Dec 04 10:32:38 compute-0 sudo[236949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:38 compute-0 python3.9[236951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:38 compute-0 sudo[236949]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:39 compute-0 sudo[237103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paqfokktfonkarwmbfploseflhmfxhda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844359.0338264-1019-105456583913469/AnsiballZ_file.py'
Dec 04 10:32:39 compute-0 sudo[237103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:39 compute-0 python3.9[237105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:39 compute-0 sudo[237103]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:39 compute-0 sudo[237255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiewjulqjvmdraaeyairuavzlnbzkslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844359.6022832-1019-166014289658175/AnsiballZ_file.py'
Dec 04 10:32:39 compute-0 sudo[237255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:40 compute-0 python3.9[237257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:40 compute-0 sudo[237255]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:40 compute-0 ceph-mon[75358]: pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:40 compute-0 sudo[237407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xevvfnwrxlvbdifoacghupbdudanvhsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844360.1876938-1019-125851815431109/AnsiballZ_file.py'
Dec 04 10:32:40 compute-0 sudo[237407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:40 compute-0 python3.9[237409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:40 compute-0 sudo[237407]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:41 compute-0 sudo[237559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyavglpbzkhhvvwkkiwyhtdpbfrsnazh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844360.9941866-1019-102369553696106/AnsiballZ_file.py'
Dec 04 10:32:41 compute-0 sudo[237559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:41 compute-0 python3.9[237561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:41 compute-0 sudo[237559]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:41 compute-0 sudo[237711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqrzazxstjusmwairnkqaqkicnjgjhcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844361.598132-1019-27854648145887/AnsiballZ_file.py'
Dec 04 10:32:41 compute-0 sudo[237711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:42 compute-0 python3.9[237713]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:42 compute-0 sudo[237711]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:42 compute-0 ceph-mon[75358]: pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:43 compute-0 sudo[237863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtobdvwgjffatxxolothdwrsrbeagflj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844362.250274-1019-214951096967481/AnsiballZ_file.py'
Dec 04 10:32:43 compute-0 sudo[237863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:43 compute-0 python3.9[237865]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:43 compute-0 sudo[237863]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:43 compute-0 sudo[238015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnivquqqtpzjchmalgtqekequilbxpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844363.6228218-1019-146956311818117/AnsiballZ_file.py'
Dec 04 10:32:43 compute-0 sudo[238015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:44 compute-0 python3.9[238017]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:44 compute-0 sudo[238015]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:44 compute-0 ceph-mon[75358]: pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:45 compute-0 sshd-session[236952]: Invalid user master from 101.47.163.20 port 38946
Dec 04 10:32:45 compute-0 sshd-session[236952]: Received disconnect from 101.47.163.20 port 38946:11: Bye Bye [preauth]
Dec 04 10:32:45 compute-0 sshd-session[236952]: Disconnected from invalid user master 101.47.163.20 port 38946 [preauth]
Dec 04 10:32:46 compute-0 ceph-mon[75358]: pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:47 compute-0 podman[238042]: 2025-12-04 10:32:47.962917416 +0000 UTC m=+0.061003758 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 04 10:32:48 compute-0 ceph-mon[75358]: pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:49 compute-0 sudo[238188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppbpygsdeynkiihwgpwbnnakqfbhrqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844368.755128-1208-66908191390949/AnsiballZ_getent.py'
Dec 04 10:32:49 compute-0 sudo[238188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:49 compute-0 python3.9[238190]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 04 10:32:49 compute-0 sudo[238188]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:49 compute-0 sudo[238341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwlwlwafhegttpsmbrjtgowiwptzwvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844369.4936295-1216-173144633495310/AnsiballZ_group.py'
Dec 04 10:32:49 compute-0 sudo[238341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:50 compute-0 python3.9[238343]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 04 10:32:50 compute-0 groupadd[238344]: group added to /etc/group: name=nova, GID=42436
Dec 04 10:32:50 compute-0 groupadd[238344]: group added to /etc/gshadow: name=nova
Dec 04 10:32:50 compute-0 groupadd[238344]: new group: name=nova, GID=42436
Dec 04 10:32:50 compute-0 sudo[238341]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:50 compute-0 ceph-mon[75358]: pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:50 compute-0 sudo[238499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwqeuhtnnrvedbxqytgynczoacvxkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844370.3535075-1224-124808847193161/AnsiballZ_user.py'
Dec 04 10:32:50 compute-0 sudo[238499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:50 compute-0 python3.9[238501]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 04 10:32:51 compute-0 useradd[238503]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 04 10:32:51 compute-0 useradd[238503]: add 'nova' to group 'libvirt'
Dec 04 10:32:51 compute-0 useradd[238503]: add 'nova' to shadow group 'libvirt'
Dec 04 10:32:51 compute-0 sudo[238499]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:51 compute-0 sshd-session[238534]: Accepted publickey for zuul from 192.168.122.30 port 36358 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:32:51 compute-0 systemd-logind[798]: New session 51 of user zuul.
Dec 04 10:32:51 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 04 10:32:51 compute-0 sshd-session[238534]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:32:52 compute-0 sshd-session[238537]: Received disconnect from 192.168.122.30 port 36358:11: disconnected by user
Dec 04 10:32:52 compute-0 sshd-session[238537]: Disconnected from user zuul 192.168.122.30 port 36358
Dec 04 10:32:52 compute-0 sshd-session[238534]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:32:52 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 04 10:32:52 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Dec 04 10:32:52 compute-0 systemd-logind[798]: Removed session 51.
Dec 04 10:32:52 compute-0 ceph-mon[75358]: pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:52 compute-0 python3.9[238687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:53 compute-0 python3.9[238808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844372.2623973-1249-186420345472586/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:53 compute-0 python3.9[238958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:54 compute-0 python3.9[239034]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:54 compute-0 ceph-mon[75358]: pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:54 compute-0 python3.9[239184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:32:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:32:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:32:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:55 compute-0 python3.9[239305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844374.3197238-1249-233661201797640/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:55 compute-0 ceph-mon[75358]: pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:55 compute-0 python3.9[239455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:56 compute-0 podman[239550]: 2025-12-04 10:32:56.32718519 +0000 UTC m=+0.104329767 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 04 10:32:56 compute-0 python3.9[239591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844375.397189-1249-109819128706089/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:56 compute-0 python3.9[239751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:57 compute-0 python3.9[239872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844376.5887334-1249-131569728837859/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:32:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:32:57 compute-0 podman[239996]: 2025-12-04 10:32:57.942299074 +0000 UTC m=+0.051402196 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:32:58 compute-0 python3.9[240035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:32:58 compute-0 ceph-mon[75358]: pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:58 compute-0 python3.9[240162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844377.6326604-1249-98288261275346/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:32:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:32:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:32:59 compute-0 sudo[240312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxzphxylazpmxfokyfbharcdnkfisvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844378.9234407-1332-233233806363686/AnsiballZ_file.py'
Dec 04 10:32:59 compute-0 sudo[240312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:32:59 compute-0 python3.9[240314]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:32:59 compute-0 sudo[240312]: pam_unix(sudo:session): session closed for user root
Dec 04 10:32:59 compute-0 sudo[240464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwcgceqvnyrmngorhqufjhcothsswli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844379.6404161-1340-105103288111260/AnsiballZ_copy.py'
Dec 04 10:32:59 compute-0 sudo[240464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:00 compute-0 python3.9[240466]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:33:00 compute-0 sudo[240464]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:00 compute-0 ceph-mon[75358]: pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:00 compute-0 sudo[240616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odsgyecoubsdgpajdpdlwpbnwixuocau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844380.26333-1348-244958179567571/AnsiballZ_stat.py'
Dec 04 10:33:00 compute-0 sudo[240616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:00 compute-0 python3.9[240618]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:00 compute-0 sudo[240616]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:01 compute-0 sudo[240768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukyuadzbyksgelkurpgvsegvtadsluw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844380.891467-1356-53973366858985/AnsiballZ_stat.py'
Dec 04 10:33:01 compute-0 sudo[240768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:01 compute-0 ceph-mon[75358]: pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:01 compute-0 python3.9[240770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:33:01 compute-0 sudo[240768]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:01 compute-0 sudo[240891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiupszoieuvgspdzhrveabeypuxhthdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844380.891467-1356-53973366858985/AnsiballZ_copy.py'
Dec 04 10:33:01 compute-0 sudo[240891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:01 compute-0 python3.9[240893]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764844380.891467-1356-53973366858985/.source _original_basename=.i55ys04b follow=False checksum=ab11a89d1b6d7fe91e220a46fbf2bb5f52f68c89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 04 10:33:01 compute-0 sudo[240891]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:02 compute-0 python3.9[241046]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:03 compute-0 python3.9[241199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:33:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:03 compute-0 python3.9[241320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844382.9344735-1382-37164165681981/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:33:04 compute-0 ceph-mon[75358]: pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:04 compute-0 python3.9[241470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 04 10:33:04 compute-0 sshd-session[240972]: Invalid user support from 45.135.232.92 port 41936
Dec 04 10:33:05 compute-0 python3.9[241591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844384.067138-1397-44641461593624/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 04 10:33:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:05 compute-0 sshd-session[240972]: Connection reset by invalid user support 45.135.232.92 port 41936 [preauth]
Dec 04 10:33:05 compute-0 sudo[241742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypojmgurnokamudmzdnephfyjbkllfzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844385.361703-1414-12508453336940/AnsiballZ_container_config_data.py'
Dec 04 10:33:05 compute-0 sudo[241742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:05 compute-0 python3.9[241744]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 04 10:33:05 compute-0 sudo[241742]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:06 compute-0 ceph-mon[75358]: pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:06 compute-0 sudo[241895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvodxuaapggxkchasyllgrybycslfcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844386.0440302-1423-191947354023074/AnsiballZ_container_config_hash.py'
Dec 04 10:33:06 compute-0 sudo[241895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:06 compute-0 python3.9[241897]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 04 10:33:06 compute-0 sudo[241895]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:07 compute-0 sudo[242047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiufmafkhckabytvajwqirtuxoazomd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764844386.785177-1433-190564586766089/AnsiballZ_edpm_container_manage.py'
Dec 04 10:33:07 compute-0 sudo[242047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:07 compute-0 python3[242049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 04 10:33:07 compute-0 sshd-session[241636]: Connection reset by authenticating user root 45.135.232.92 port 46406 [preauth]
Dec 04 10:33:08 compute-0 ceph-mon[75358]: pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:09 compute-0 sshd-session[242078]: Invalid user abc from 45.135.232.92 port 46412
Dec 04 10:33:09 compute-0 sshd-session[242078]: Connection reset by invalid user abc 45.135.232.92 port 46412 [preauth]
Dec 04 10:33:10 compute-0 ceph-mon[75358]: pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:11 compute-0 ceph-mon[75358]: pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:12 compute-0 sshd-session[242106]: Invalid user ubnt from 45.135.232.92 port 46428
Dec 04 10:33:12 compute-0 sshd-session[242106]: Connection reset by invalid user ubnt 45.135.232.92 port 46428 [preauth]
Dec 04 10:33:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:13 compute-0 ceph-mon[75358]: pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:15 compute-0 sshd-session[242130]: Invalid user ubnt from 45.135.232.92 port 46440
Dec 04 10:33:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:15 compute-0 sshd-session[242130]: Connection reset by invalid user ubnt 45.135.232.92 port 46440 [preauth]
Dec 04 10:33:16 compute-0 sshd-session[242148]: Invalid user intell from 103.149.86.230 port 36628
Dec 04 10:33:16 compute-0 sshd-session[242150]: Invalid user mega from 74.249.218.27 port 35828
Dec 04 10:33:16 compute-0 sshd-session[242150]: Received disconnect from 74.249.218.27 port 35828:11: Bye Bye [preauth]
Dec 04 10:33:16 compute-0 sshd-session[242150]: Disconnected from invalid user mega 74.249.218.27 port 35828 [preauth]
Dec 04 10:33:17 compute-0 sshd-session[242148]: Received disconnect from 103.149.86.230 port 36628:11: Bye Bye [preauth]
Dec 04 10:33:17 compute-0 sshd-session[242148]: Disconnected from invalid user intell 103.149.86.230 port 36628 [preauth]
Dec 04 10:33:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:17 compute-0 ceph-mon[75358]: pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:17 compute-0 podman[242064]: 2025-12-04 10:33:17.757615696 +0000 UTC m=+10.411336577 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 04 10:33:17 compute-0 podman[242174]: 2025-12-04 10:33:17.959135905 +0000 UTC m=+0.072844685 container create f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 04 10:33:17 compute-0 podman[242174]: 2025-12-04 10:33:17.922133539 +0000 UTC m=+0.035842399 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 04 10:33:17 compute-0 python3[242049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 04 10:33:18 compute-0 sudo[242047]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:18 compute-0 sudo[242371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwvkwqrxmnzfqhspmdyfadjmtpckoav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844398.247879-1441-88677160713575/AnsiballZ_stat.py'
Dec 04 10:33:18 compute-0 sudo[242371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:18 compute-0 podman[242335]: 2025-12-04 10:33:18.579508536 +0000 UTC m=+0.066599294 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 04 10:33:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:18 compute-0 ceph-mon[75358]: pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:18 compute-0 python3.9[242377]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:18 compute-0 sudo[242371]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:19 compute-0 sudo[242533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrwmyxygwclfdvfjkapotfzafqaejpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844399.1265624-1453-62157508824289/AnsiballZ_container_config_data.py'
Dec 04 10:33:19 compute-0 sudo[242533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:19 compute-0 python3.9[242535]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 04 10:33:19 compute-0 sudo[242533]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:19 compute-0 ceph-mon[75358]: pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:20 compute-0 sudo[242685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosupeurptohblmptrwfphcsvnegdvuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844399.8015218-1462-211169348293586/AnsiballZ_container_config_hash.py'
Dec 04 10:33:20 compute-0 sudo[242685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:20 compute-0 python3.9[242687]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 04 10:33:20 compute-0 sudo[242685]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:20 compute-0 sudo[242837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbziwlpdzrlshsutpcqwesdxvjtdcvwx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764844400.4989665-1472-128968490731527/AnsiballZ_edpm_container_manage.py'
Dec 04 10:33:20 compute-0 sudo[242837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:21 compute-0 python3[242839]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 04 10:33:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:21 compute-0 podman[242877]: 2025-12-04 10:33:21.240421311 +0000 UTC m=+0.024738461 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 04 10:33:21 compute-0 podman[242877]: 2025-12-04 10:33:21.359073554 +0000 UTC m=+0.143390674 container create f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 04 10:33:21 compute-0 python3[242839]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 04 10:33:21 compute-0 ceph-mon[75358]: pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:21 compute-0 sudo[242837]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:21 compute-0 sudo[243065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxcimnptzagagdhnhgnhbotzftdokla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844401.7174203-1480-253213411224749/AnsiballZ_stat.py'
Dec 04 10:33:21 compute-0 sudo[243065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:22 compute-0 python3.9[243067]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:22 compute-0 sudo[243065]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:22 compute-0 sudo[243219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdwwccplgkfwomzewtftzpuxhmddirqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844402.389445-1489-219191535584243/AnsiballZ_file.py'
Dec 04 10:33:22 compute-0 sudo[243219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:22 compute-0 python3.9[243221]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:33:22 compute-0 sudo[243219]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:23 compute-0 sudo[243370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddefouyselmqofptbxcznwwrrkoxpyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844402.8842034-1489-222912326697785/AnsiballZ_copy.py'
Dec 04 10:33:23 compute-0 sudo[243370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:23 compute-0 python3.9[243372]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764844402.8842034-1489-222912326697785/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 04 10:33:23 compute-0 sudo[243370]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:23 compute-0 sudo[243446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocmxahtqxtejmlqphdgnfujycocotnoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844402.8842034-1489-222912326697785/AnsiballZ_systemd.py'
Dec 04 10:33:23 compute-0 sudo[243446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:23 compute-0 python3.9[243448]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 04 10:33:23 compute-0 systemd[1]: Reloading.
Dec 04 10:33:24 compute-0 systemd-rc-local-generator[243474]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:33:24 compute-0 systemd-sysv-generator[243477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:33:24 compute-0 ceph-mon[75358]: pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:24 compute-0 sudo[243446]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:24 compute-0 sudo[243557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxjgobzzifptnejtgxtlvipiwvjdtkbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844402.8842034-1489-222912326697785/AnsiballZ_systemd.py'
Dec 04 10:33:24 compute-0 sudo[243557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:24 compute-0 python3.9[243559]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 04 10:33:24 compute-0 systemd[1]: Reloading.
Dec 04 10:33:25 compute-0 systemd-rc-local-generator[243588]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 04 10:33:25 compute-0 systemd-sysv-generator[243592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 04 10:33:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:25 compute-0 systemd[1]: Starting nova_compute container...
Dec 04 10:33:25 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:25 compute-0 podman[243598]: 2025-12-04 10:33:25.428284746 +0000 UTC m=+0.096649951 container init f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:25 compute-0 podman[243598]: 2025-12-04 10:33:25.434825395 +0000 UTC m=+0.103190560 container start f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:25 compute-0 podman[243598]: nova_compute
Dec 04 10:33:25 compute-0 nova_compute[243612]: + sudo -E kolla_set_configs
Dec 04 10:33:25 compute-0 systemd[1]: Started nova_compute container.
Dec 04 10:33:25 compute-0 sudo[243557]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Validating config file
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying service configuration files
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Deleting /etc/ceph
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Creating directory /etc/ceph
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Writing out command to execute
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:25 compute-0 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 04 10:33:25 compute-0 nova_compute[243612]: ++ cat /run_command
Dec 04 10:33:25 compute-0 nova_compute[243612]: + CMD=nova-compute
Dec 04 10:33:25 compute-0 nova_compute[243612]: + ARGS=
Dec 04 10:33:25 compute-0 nova_compute[243612]: + sudo kolla_copy_cacerts
Dec 04 10:33:25 compute-0 nova_compute[243612]: + [[ ! -n '' ]]
Dec 04 10:33:25 compute-0 nova_compute[243612]: + . kolla_extend_start
Dec 04 10:33:25 compute-0 nova_compute[243612]: + echo 'Running command: '\''nova-compute'\'''
Dec 04 10:33:25 compute-0 nova_compute[243612]: Running command: 'nova-compute'
Dec 04 10:33:25 compute-0 nova_compute[243612]: + umask 0022
Dec 04 10:33:25 compute-0 nova_compute[243612]: + exec nova-compute
Dec 04 10:33:26 compute-0 ceph-mon[75358]: pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:26 compute-0 python3.9[243773]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:33:26
Dec 04 10:33:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:33:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:33:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr']
Dec 04 10:33:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:33:26 compute-0 podman[243898]: 2025-12-04 10:33:26.964386557 +0000 UTC m=+0.084223070 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:33:27 compute-0 python3.9[243941]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:27 compute-0 python3.9[244101]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.027 243616 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:33:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:33:28 compute-0 ceph-mon[75358]: pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.240 243616 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.260 243616 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.261 243616 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 04 10:33:28 compute-0 sudo[244263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxpyjxiyjhwgysjhuebtiwiulxpwxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844408.0007393-1549-150780453794183/AnsiballZ_podman_container.py'
Dec 04 10:33:28 compute-0 sudo[244263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:28 compute-0 podman[244229]: 2025-12-04 10:33:28.589597007 +0000 UTC m=+0.061733996 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 04 10:33:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:28 compute-0 python3.9[244276]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 04 10:33:28 compute-0 nova_compute[243612]: 2025-12-04 10:33:28.863 243616 INFO nova.virt.driver [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 04 10:33:28 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:33:28 compute-0 sudo[244263]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.001 243616 INFO nova.compute.provider_config [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.018 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.018 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.027 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.027 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 WARNING oslo_config.cfg [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 04 10:33:29 compute-0 nova_compute[243612]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 04 10:33:29 compute-0 nova_compute[243612]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 04 10:33:29 compute-0 nova_compute[243612]: and ``live_migration_inbound_addr`` respectively.
Dec 04 10:33:29 compute-0 nova_compute[243612]: ).  Its value may be silently ignored in the future.
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.107 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.107 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_secret_uuid        = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.178 243616 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.195 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 04 10:33:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 04 10:33:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.279 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5ce1468fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.283 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5ce1468fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.284 243616 INFO nova.virt.libvirt.driver [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Connection event '1' reason 'None'
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.300 243616 WARNING nova.virt.libvirt.driver [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.300 243616 DEBUG nova.virt.libvirt.volume.mount [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 04 10:33:29 compute-0 ceph-mon[75358]: pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:29 compute-0 sudo[244503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lncqceeiszsssioahwnrwdtglgqbsrot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844409.1636498-1557-64588856416327/AnsiballZ_systemd.py'
Dec 04 10:33:29 compute-0 sudo[244503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:29 compute-0 python3.9[244505]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 04 10:33:29 compute-0 systemd[1]: Stopping nova_compute container...
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 04 10:33:29 compute-0 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 04 10:33:30 compute-0 virtqemud[244380]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 04 10:33:30 compute-0 virtqemud[244380]: hostname: compute-0
Dec 04 10:33:30 compute-0 virtqemud[244380]: End of file while reading data: Input/output error
Dec 04 10:33:30 compute-0 systemd[1]: libpod-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8.scope: Deactivated successfully.
Dec 04 10:33:30 compute-0 systemd[1]: libpod-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8.scope: Consumed 3.534s CPU time.
Dec 04 10:33:30 compute-0 podman[244511]: 2025-12-04 10:33:30.673570753 +0000 UTC m=+0.810115955 container died f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 04 10:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8-userdata-shm.mount: Deactivated successfully.
Dec 04 10:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f-merged.mount: Deactivated successfully.
Dec 04 10:33:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:31 compute-0 sudo[244548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:33:31 compute-0 sudo[244548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:31 compute-0 sudo[244548]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:31 compute-0 sudo[244573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:33:31 compute-0 sudo[244573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:33 compute-0 podman[244511]: 2025-12-04 10:33:33.551849742 +0000 UTC m=+3.688394924 container cleanup f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 04 10:33:33 compute-0 podman[244511]: nova_compute
Dec 04 10:33:33 compute-0 ceph-mon[75358]: pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:33 compute-0 ceph-mon[75358]: pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:33 compute-0 podman[244616]: nova_compute
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:33 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 04 10:33:33 compute-0 systemd[1]: Stopped nova_compute container.
Dec 04 10:33:33 compute-0 systemd[1]: Starting nova_compute container...
Dec 04 10:33:33 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:33 compute-0 podman[244627]: 2025-12-04 10:33:33.729635676 +0000 UTC m=+0.084269561 container init f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:33 compute-0 podman[244627]: 2025-12-04 10:33:33.737272411 +0000 UTC m=+0.091906276 container start f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0)
Dec 04 10:33:33 compute-0 podman[244627]: nova_compute
Dec 04 10:33:33 compute-0 nova_compute[244644]: + sudo -E kolla_set_configs
Dec 04 10:33:33 compute-0 systemd[1]: Started nova_compute container.
Dec 04 10:33:33 compute-0 sudo[244503]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Validating config file
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying service configuration files
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /etc/ceph
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Creating directory /etc/ceph
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Writing out command to execute
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:33 compute-0 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 04 10:33:33 compute-0 nova_compute[244644]: ++ cat /run_command
Dec 04 10:33:33 compute-0 nova_compute[244644]: + CMD=nova-compute
Dec 04 10:33:33 compute-0 nova_compute[244644]: + ARGS=
Dec 04 10:33:33 compute-0 nova_compute[244644]: + sudo kolla_copy_cacerts
Dec 04 10:33:33 compute-0 sudo[244573]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:33 compute-0 nova_compute[244644]: + [[ ! -n '' ]]
Dec 04 10:33:33 compute-0 nova_compute[244644]: + . kolla_extend_start
Dec 04 10:33:33 compute-0 nova_compute[244644]: + echo 'Running command: '\''nova-compute'\'''
Dec 04 10:33:33 compute-0 nova_compute[244644]: Running command: 'nova-compute'
Dec 04 10:33:33 compute-0 nova_compute[244644]: + umask 0022
Dec 04 10:33:33 compute-0 nova_compute[244644]: + exec nova-compute
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:33:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:33:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:33:33 compute-0 sudo[244694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:33:33 compute-0 sudo[244694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:33 compute-0 sudo[244694]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:33 compute-0 sudo[244719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:33:33 compute-0 sudo[244719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:34 compute-0 sudo[244884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhwcnzhcvfgqotihutjeeercegaszpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764844413.990936-1566-154322429129193/AnsiballZ_podman_container.py'
Dec 04 10:33:34 compute-0 sudo[244884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.275498782 +0000 UTC m=+0.043565305 container create e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:33:34 compute-0 systemd[1]: Started libpod-conmon-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope.
Dec 04 10:33:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.348280525 +0000 UTC m=+0.116347048 container init e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.255201951 +0000 UTC m=+0.023268484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.355380177 +0000 UTC m=+0.123446700 container start e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.358943772 +0000 UTC m=+0.127010315 container attach e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:33:34 compute-0 elegant_cannon[244901]: 167 167
Dec 04 10:33:34 compute-0 systemd[1]: libpod-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope: Deactivated successfully.
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.362086879 +0000 UTC m=+0.130153402 container died e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6906da004e5d977156cfa1225defe4eb03b150969a8df1533dcdd49cb3947545-merged.mount: Deactivated successfully.
Dec 04 10:33:34 compute-0 podman[244880]: 2025-12-04 10:33:34.422023789 +0000 UTC m=+0.190090312 container remove e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:33:34 compute-0 systemd[1]: libpod-conmon-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope: Deactivated successfully.
Dec 04 10:33:34 compute-0 python3.9[244898]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 04 10:33:34 compute-0 podman[244928]: 2025-12-04 10:33:34.553492303 +0000 UTC m=+0.020612440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:33:34 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:33:34 compute-0 podman[244928]: 2025-12-04 10:33:34.712217806 +0000 UTC m=+0.179337923 container create e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:33:34 compute-0 systemd[1]: Started libpod-conmon-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope.
Dec 04 10:33:34 compute-0 systemd[1]: Started libpod-conmon-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope.
Dec 04 10:33:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:34 compute-0 podman[244965]: 2025-12-04 10:33:34.819435162 +0000 UTC m=+0.218141062 container init f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:33:34 compute-0 podman[244928]: 2025-12-04 10:33:34.825428758 +0000 UTC m=+0.292548895 container init e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 04 10:33:34 compute-0 podman[244965]: 2025-12-04 10:33:34.829493926 +0000 UTC m=+0.228199816 container start f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 04 10:33:34 compute-0 podman[244928]: 2025-12-04 10:33:34.835409129 +0000 UTC m=+0.302529246 container start e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:33:34 compute-0 python3.9[244898]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 04 10:33:34 compute-0 podman[244928]: 2025-12-04 10:33:34.844220442 +0000 UTC m=+0.311340579 container attach e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Applying nova statedir ownership
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 04 10:33:34 compute-0 nova_compute_init[244992]: INFO:nova_statedir:Nova statedir ownership complete
Dec 04 10:33:34 compute-0 systemd[1]: libpod-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope: Deactivated successfully.
Dec 04 10:33:34 compute-0 podman[245004]: 2025-12-04 10:33:34.924705061 +0000 UTC m=+0.024635778 container died f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa-userdata-shm.mount: Deactivated successfully.
Dec 04 10:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3-merged.mount: Deactivated successfully.
Dec 04 10:33:34 compute-0 podman[245004]: 2025-12-04 10:33:34.964874813 +0000 UTC m=+0.064805510 container cleanup f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 04 10:33:34 compute-0 sudo[244884]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:34 compute-0 systemd[1]: libpod-conmon-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope: Deactivated successfully.
Dec 04 10:33:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:35 compute-0 awesome_napier[244979]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:33:35 compute-0 awesome_napier[244979]: --> All data devices are unavailable
Dec 04 10:33:35 compute-0 systemd[1]: libpod-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope: Deactivated successfully.
Dec 04 10:33:35 compute-0 podman[244928]: 2025-12-04 10:33:35.34182776 +0000 UTC m=+0.808947887 container died e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:33:35 compute-0 podman[244928]: 2025-12-04 10:33:35.386251895 +0000 UTC m=+0.853372012 container remove e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:35 compute-0 systemd[1]: libpod-conmon-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope: Deactivated successfully.
Dec 04 10:33:35 compute-0 sudo[244719]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:35 compute-0 sshd-session[215093]: Connection closed by 192.168.122.30 port 49164
Dec 04 10:33:35 compute-0 sshd-session[215090]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:33:35 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 04 10:33:35 compute-0 systemd[1]: session-50.scope: Consumed 2min 18.462s CPU time.
Dec 04 10:33:35 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Dec 04 10:33:35 compute-0 systemd-logind[798]: Removed session 50.
Dec 04 10:33:35 compute-0 sudo[245080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:33:35 compute-0 sudo[245080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:35 compute-0 sudo[245080]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:35 compute-0 sudo[245105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:33:35 compute-0 sudo[245105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:35 compute-0 ceph-mon[75358]: pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906-merged.mount: Deactivated successfully.
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.846048808 +0000 UTC m=+0.045164385 container create 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:33:35 compute-0 systemd[1]: Started libpod-conmon-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope.
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.825404228 +0000 UTC m=+0.024519825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.944800859 +0000 UTC m=+0.143916466 container init 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.95269881 +0000 UTC m=+0.151814387 container start 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.955902608 +0000 UTC m=+0.155018185 container attach 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:33:35 compute-0 gracious_agnesi[245158]: 167 167
Dec 04 10:33:35 compute-0 systemd[1]: libpod-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope: Deactivated successfully.
Dec 04 10:33:35 compute-0 podman[245142]: 2025-12-04 10:33:35.96054268 +0000 UTC m=+0.159658277 container died 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2878082f4ecc3790610b4c791de23920baadb311236623b8e12bc744a6acd77-merged.mount: Deactivated successfully.
Dec 04 10:33:36 compute-0 podman[245142]: 2025-12-04 10:33:36.012559999 +0000 UTC m=+0.211675576 container remove 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:33:36 compute-0 systemd[1]: libpod-conmon-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope: Deactivated successfully.
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.094 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.197721963 +0000 UTC m=+0.057331829 container create 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:33:36 compute-0 systemd[1]: Started libpod-conmon-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope.
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.16378046 +0000 UTC m=+0.023390316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.30623069 +0000 UTC m=+0.165840546 container init 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.309 244650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.314898199 +0000 UTC m=+0.174508035 container start 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.318781833 +0000 UTC m=+0.178391699 container attach 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.327 244650 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.328 244650 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 04 10:33:36 compute-0 reverent_mclean[245200]: {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     "0": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "devices": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "/dev/loop3"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             ],
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_name": "ceph_lv0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_size": "21470642176",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "name": "ceph_lv0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "tags": {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_name": "ceph",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.crush_device_class": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.encrypted": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.objectstore": "bluestore",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_id": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.vdo": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.with_tpm": "0"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             },
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "vg_name": "ceph_vg0"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         }
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     ],
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     "1": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "devices": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "/dev/loop4"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             ],
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_name": "ceph_lv1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_size": "21470642176",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "name": "ceph_lv1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "tags": {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_name": "ceph",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.crush_device_class": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.encrypted": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.objectstore": "bluestore",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_id": "1",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.vdo": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.with_tpm": "0"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             },
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "vg_name": "ceph_vg1"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         }
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     ],
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     "2": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "devices": [
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "/dev/loop5"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             ],
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_name": "ceph_lv2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_size": "21470642176",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "name": "ceph_lv2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "tags": {
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.cluster_name": "ceph",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.crush_device_class": "",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.encrypted": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.objectstore": "bluestore",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osd_id": "2",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.vdo": "0",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:                 "ceph.with_tpm": "0"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             },
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "type": "block",
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:             "vg_name": "ceph_vg2"
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:         }
Dec 04 10:33:36 compute-0 reverent_mclean[245200]:     ]
Dec 04 10:33:36 compute-0 reverent_mclean[245200]: }
Dec 04 10:33:36 compute-0 systemd[1]: libpod-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope: Deactivated successfully.
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.666770859 +0000 UTC m=+0.526380705 container died 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f-merged.mount: Deactivated successfully.
Dec 04 10:33:36 compute-0 podman[245183]: 2025-12-04 10:33:36.773677617 +0000 UTC m=+0.633287493 container remove 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.794 244650 INFO nova.virt.driver [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 04 10:33:36 compute-0 systemd[1]: libpod-conmon-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope: Deactivated successfully.
Dec 04 10:33:36 compute-0 sudo[245105]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.925 244650 INFO nova.compute.provider_config [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 04 10:33:36 compute-0 sudo[245223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:33:36 compute-0 sudo[245223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 04 10:33:36 compute-0 sudo[245223]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:36 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 sudo[245248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 sudo[245248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 WARNING oslo_config.cfg [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 04 10:33:37 compute-0 nova_compute[244644]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 04 10:33:37 compute-0 nova_compute[244644]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 04 10:33:37 compute-0 nova_compute[244644]: and ``live_migration_inbound_addr`` respectively.
Dec 04 10:33:37 compute-0 nova_compute[244644]: ).  Its value may be silently ignored in the future.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.024 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.024 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_secret_uuid        = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.062 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.063 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.063 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.106 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.108 244650 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.126 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 04 10:33:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.145 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f31cd7af250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.148 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f31cd7af250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.149 244650 INFO nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Connection event '1' reason 'None'
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.156 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host capabilities <capabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]: 
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <host>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <uuid>1f0bfa2d-c922-4848-973a-776654e5dc59</uuid>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <arch>x86_64</arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model>EPYC-Rome-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <vendor>AMD</vendor>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <microcode version='16777317'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <signature family='23' model='49' stepping='0'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='x2apic'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='tsc-deadline'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='osxsave'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='hypervisor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='tsc_adjust'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='spec-ctrl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='stibp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='arch-capabilities'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='cmp_legacy'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='topoext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='virt-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='lbrv'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='tsc-scale'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='vmcb-clean'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='pause-filter'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='pfthreshold'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='svme-addr-chk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='rdctl-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='skip-l1dfl-vmentry'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='mds-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature name='pschange-mc-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <pages unit='KiB' size='4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <pages unit='KiB' size='2048'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <pages unit='KiB' size='1048576'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <power_management>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <suspend_mem/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </power_management>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <iommu support='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <migration_features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <live/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <uri_transports>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <uri_transport>tcp</uri_transport>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <uri_transport>rdma</uri_transport>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </uri_transports>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </migration_features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <topology>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <cells num='1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <cell id='0'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <memory unit='KiB'>7864320</memory>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <pages unit='KiB' size='2048'>0</pages>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <distances>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <sibling id='0' value='10'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           </distances>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           <cpus num='8'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:           </cpus>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         </cell>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </cells>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </topology>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <cache>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </cache>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <secmodel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model>selinux</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <doi>0</doi>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </secmodel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <secmodel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model>dac</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <doi>0</doi>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </secmodel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </host>
Dec 04 10:33:37 compute-0 nova_compute[244644]: 
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <guest>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <os_type>hvm</os_type>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <arch name='i686'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <wordsize>32</wordsize>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <domain type='qemu'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <domain type='kvm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <pae/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <nonpae/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <acpi default='on' toggle='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <apic default='on' toggle='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <cpuselection/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <deviceboot/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <disksnapshot default='on' toggle='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <externalSnapshot/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </guest>
Dec 04 10:33:37 compute-0 nova_compute[244644]: 
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <guest>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <os_type>hvm</os_type>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <arch name='x86_64'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <wordsize>64</wordsize>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <domain type='qemu'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <domain type='kvm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <acpi default='on' toggle='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <apic default='on' toggle='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <cpuselection/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <deviceboot/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <disksnapshot default='on' toggle='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <externalSnapshot/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </guest>
Dec 04 10:33:37 compute-0 nova_compute[244644]: 
Dec 04 10:33:37 compute-0 nova_compute[244644]: </capabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]: 
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.160 244650 WARNING nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.161 244650 DEBUG nova.virt.libvirt.volume.mount [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.167 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.192 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 04 10:33:37 compute-0 nova_compute[244644]: <domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <path>/usr/libexec/qemu-kvm</path>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <domain>kvm</domain>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <arch>i686</arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <vcpu max='4096'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <iothreads supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <os supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='firmware'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <loader supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>rom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pflash</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='readonly'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>yes</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='secure'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </loader>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </os>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-passthrough' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='hostPassthroughMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='maximum' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='maximumMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-model' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <vendor>AMD</vendor>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='x2apic'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-deadline'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='hypervisor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc_adjust'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='spec-ctrl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='stibp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='cmp_legacy'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='overflow-recov'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='succor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='amd-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='virt-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lbrv'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-scale'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='vmcb-clean'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='flushbyasid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pause-filter'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pfthreshold'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='svme-addr-chk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='disable' name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='custom' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Dhyana-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-128'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-256'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-512'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v6'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v7'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <memoryBacking supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='sourceType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>anonymous</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>memfd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </memoryBacking>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <disk supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='diskDevice'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>disk</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cdrom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>floppy</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>lun</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>fdc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>sata</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </disk>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <graphics supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vnc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egl-headless</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </graphics>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <video supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='modelType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vga</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cirrus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>none</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>bochs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ramfb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </video>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hostdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='mode'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>subsystem</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='startupPolicy'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>mandatory</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>requisite</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>optional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='subsysType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pci</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='capsType'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='pciBackend'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hostdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <rng supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>random</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </rng>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <filesystem supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='driverType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>path</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>handle</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtiofs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </filesystem>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <tpm supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-tis</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-crb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emulator</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>external</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendVersion'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>2.0</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </tpm>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <redirdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </redirdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <channel supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </channel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <crypto supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </crypto>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <interface supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>passt</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </interface>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <panic supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>isa</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>hyperv</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </panic>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <console supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>null</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dev</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pipe</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stdio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>udp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tcp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu-vdagent</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </console>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <gic supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <vmcoreinfo supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <genid supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backingStoreInput supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backup supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <async-teardown supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <ps2 supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sev supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sgx supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hyperv supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='features'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>relaxed</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vapic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>spinlocks</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vpindex</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>runtime</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>synic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stimer</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reset</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vendor_id</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>frequencies</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reenlightenment</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tlbflush</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ipi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>avic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emsr_bitmap</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>xmm_input</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <spinlocks>4095</spinlocks>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <stimer_direct>on</stimer_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_direct>on</tlbflush_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_extended>on</tlbflush_extended>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hyperv>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <launchSecurity supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='sectype'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tdx</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </launchSecurity>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]: </domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.199 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 04 10:33:37 compute-0 nova_compute[244644]: <domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <path>/usr/libexec/qemu-kvm</path>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <domain>kvm</domain>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <arch>i686</arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <vcpu max='240'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <iothreads supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <os supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='firmware'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <loader supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>rom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pflash</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='readonly'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>yes</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='secure'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </loader>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </os>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-passthrough' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='hostPassthroughMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='maximum' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='maximumMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-model' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <vendor>AMD</vendor>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='x2apic'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-deadline'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='hypervisor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc_adjust'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='spec-ctrl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='stibp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='cmp_legacy'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='overflow-recov'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='succor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='amd-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='virt-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lbrv'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-scale'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='vmcb-clean'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='flushbyasid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pause-filter'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pfthreshold'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='svme-addr-chk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='disable' name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='custom' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Dhyana-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-128'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-256'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-512'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v6'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v7'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <memoryBacking supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='sourceType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>anonymous</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>memfd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </memoryBacking>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <disk supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='diskDevice'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>disk</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cdrom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>floppy</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>lun</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ide</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>fdc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>sata</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </disk>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <graphics supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vnc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egl-headless</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </graphics>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <video supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='modelType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vga</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cirrus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>none</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>bochs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ramfb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </video>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hostdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='mode'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>subsystem</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='startupPolicy'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>mandatory</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>requisite</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>optional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='subsysType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pci</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='capsType'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='pciBackend'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hostdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <rng supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>random</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </rng>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <filesystem supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='driverType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>path</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>handle</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtiofs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </filesystem>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <tpm supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-tis</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-crb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emulator</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>external</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendVersion'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>2.0</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </tpm>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <redirdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </redirdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <channel supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </channel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <crypto supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </crypto>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <interface supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>passt</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </interface>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <panic supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>isa</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>hyperv</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </panic>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <console supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>null</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dev</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pipe</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stdio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>udp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tcp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu-vdagent</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </console>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <gic supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <vmcoreinfo supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <genid supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backingStoreInput supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backup supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <async-teardown supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <ps2 supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sev supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sgx supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hyperv supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='features'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>relaxed</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vapic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>spinlocks</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vpindex</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>runtime</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>synic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stimer</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reset</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vendor_id</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>frequencies</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reenlightenment</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tlbflush</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ipi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>avic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emsr_bitmap</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>xmm_input</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <spinlocks>4095</spinlocks>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <stimer_direct>on</stimer_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_direct>on</tlbflush_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_extended>on</tlbflush_extended>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hyperv>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <launchSecurity supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='sectype'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tdx</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </launchSecurity>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]: </domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.240 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.244 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 04 10:33:37 compute-0 nova_compute[244644]: <domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <path>/usr/libexec/qemu-kvm</path>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <domain>kvm</domain>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <arch>x86_64</arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <vcpu max='4096'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <iothreads supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <os supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='firmware'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>efi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <loader supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>rom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pflash</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='readonly'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>yes</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='secure'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>yes</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </loader>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </os>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-passthrough' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='hostPassthroughMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='maximum' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='maximumMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-model' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <vendor>AMD</vendor>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='x2apic'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-deadline'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='hypervisor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc_adjust'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='spec-ctrl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='stibp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='cmp_legacy'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='overflow-recov'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='succor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='amd-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='virt-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lbrv'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-scale'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='vmcb-clean'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='flushbyasid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pause-filter'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pfthreshold'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='svme-addr-chk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='disable' name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='custom' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Dhyana-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-128'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-256'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-512'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v6'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v7'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.269543413 +0000 UTC m=+0.022452655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <memoryBacking supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='sourceType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>anonymous</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>memfd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </memoryBacking>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <disk supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='diskDevice'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>disk</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cdrom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>floppy</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>lun</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>fdc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>sata</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </disk>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <graphics supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vnc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egl-headless</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </graphics>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <video supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='modelType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vga</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cirrus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>none</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>bochs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ramfb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </video>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hostdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='mode'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>subsystem</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='startupPolicy'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>mandatory</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>requisite</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>optional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='subsysType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pci</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='capsType'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='pciBackend'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hostdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <rng supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>random</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </rng>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <filesystem supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='driverType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>path</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>handle</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtiofs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </filesystem>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <tpm supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-tis</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-crb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emulator</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>external</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendVersion'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>2.0</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </tpm>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <redirdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </redirdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <channel supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </channel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <crypto supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </crypto>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <interface supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>passt</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </interface>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <panic supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>isa</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>hyperv</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </panic>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <console supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>null</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dev</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pipe</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stdio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>udp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tcp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu-vdagent</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </console>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <gic supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <vmcoreinfo supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <genid supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backingStoreInput supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backup supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <async-teardown supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <ps2 supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sev supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sgx supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hyperv supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='features'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>relaxed</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vapic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>spinlocks</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vpindex</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>runtime</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>synic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stimer</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reset</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vendor_id</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>frequencies</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reenlightenment</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tlbflush</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ipi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>avic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emsr_bitmap</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>xmm_input</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <spinlocks>4095</spinlocks>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <stimer_direct>on</stimer_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_direct>on</tlbflush_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_extended>on</tlbflush_extended>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hyperv>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <launchSecurity supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='sectype'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tdx</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </launchSecurity>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]: </domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.313 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 04 10:33:37 compute-0 nova_compute[244644]: <domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <path>/usr/libexec/qemu-kvm</path>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <domain>kvm</domain>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <arch>x86_64</arch>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <vcpu max='240'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <iothreads supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <os supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='firmware'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <loader supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>rom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pflash</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='readonly'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>yes</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='secure'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>no</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </loader>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </os>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-passthrough' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='hostPassthroughMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='maximum' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='maximumMigratable'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>on</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>off</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='host-model' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <vendor>AMD</vendor>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='x2apic'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-deadline'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='hypervisor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc_adjust'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='spec-ctrl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='stibp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='cmp_legacy'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='overflow-recov'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='succor'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='amd-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='virt-ssbd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lbrv'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='tsc-scale'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='vmcb-clean'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='flushbyasid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pause-filter'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='pfthreshold'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='svme-addr-chk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <feature policy='disable' name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <mode name='custom' supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Broadwell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cascadelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Cooperlake-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Denverton-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Dhyana-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Genoa-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='auto-ibrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Milan-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amd-psfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='no-nested-data-bp'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='null-sel-clr-base'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='stibp-always-on'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-Rome-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='EPYC-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='GraniteRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-128'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-256'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx10-512'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='prefetchiti'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.42224982 +0000 UTC m=+0.175159042 container create 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Haswell-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-noTSX'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v6'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Icelake-Server-v7'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='IvyBridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='KnightsMill-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4fmaps'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-4vnniw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512er'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512pf'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G4-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Opteron_G5-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fma4'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tbm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xop'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SapphireRapids-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='amx-tile'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-bf16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-fp16'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512-vpopcntdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bitalg'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vbmi2'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrc'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fzrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='la57'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='taa-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='tsx-ldtrk'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xfd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='SierraForest-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ifma'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-ne-convert'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx-vnni-int8'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='bus-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cmpccxadd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fbsdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='fsrs'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ibrs-all'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mcdt-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pbrsb-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='psdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='sbdr-ssdp-no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='serialize'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vaes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='vpclmulqdq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Client-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='hle'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='rtm'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Skylake-Server-v5'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512bw'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512cd'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512dq'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512f'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='avx512vl'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='invpcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pcid'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='pku'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='mpx'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v2'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v3'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='core-capability'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='split-lock-detect'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='Snowridge-v4'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='cldemote'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='erms'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='gfni'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdir64b'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='movdiri'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='xsaves'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='athlon-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='core2duo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='coreduo-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='n270-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='ss'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <blockers model='phenom-v1'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnow'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <feature name='3dnowext'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </blockers>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </mode>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </cpu>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <memoryBacking supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <enum name='sourceType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>anonymous</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <value>memfd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </memoryBacking>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <disk supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='diskDevice'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>disk</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cdrom</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>floppy</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>lun</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ide</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>fdc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>sata</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </disk>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <graphics supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vnc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egl-headless</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </graphics>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <video supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='modelType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vga</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>cirrus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>none</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>bochs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ramfb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </video>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hostdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='mode'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>subsystem</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='startupPolicy'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>mandatory</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>requisite</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>optional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='subsysType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pci</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>scsi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='capsType'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='pciBackend'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hostdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <rng supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtio-non-transitional</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>random</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>egd</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </rng>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <filesystem supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='driverType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>path</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>handle</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>virtiofs</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </filesystem>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <tpm supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-tis</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tpm-crb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emulator</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>external</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendVersion'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>2.0</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </tpm>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <redirdev supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='bus'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>usb</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </redirdev>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <channel supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </channel>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <crypto supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendModel'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>builtin</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </crypto>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <interface supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='backendType'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>default</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>passt</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </interface>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <panic supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='model'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>isa</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>hyperv</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </panic>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <console supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='type'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>null</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vc</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pty</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dev</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>file</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>pipe</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stdio</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>udp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tcp</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>unix</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>qemu-vdagent</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>dbus</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </console>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </devices>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   <features>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <gic supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <vmcoreinfo supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <genid supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backingStoreInput supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <backup supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <async-teardown supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <ps2 supported='yes'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sev supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <sgx supported='no'/>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <hyperv supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='features'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>relaxed</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vapic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>spinlocks</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vpindex</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>runtime</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>synic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>stimer</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reset</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>vendor_id</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>frequencies</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>reenlightenment</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tlbflush</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>ipi</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>avic</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>emsr_bitmap</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>xmm_input</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <spinlocks>4095</spinlocks>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <stimer_direct>on</stimer_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_direct>on</tlbflush_direct>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <tlbflush_extended>on</tlbflush_extended>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </defaults>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </hyperv>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     <launchSecurity supported='yes'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       <enum name='sectype'>
Dec 04 10:33:37 compute-0 nova_compute[244644]:         <value>tdx</value>
Dec 04 10:33:37 compute-0 nova_compute[244644]:       </enum>
Dec 04 10:33:37 compute-0 nova_compute[244644]:     </launchSecurity>
Dec 04 10:33:37 compute-0 nova_compute[244644]:   </features>
Dec 04 10:33:37 compute-0 nova_compute[244644]: </domainCapabilities>
Dec 04 10:33:37 compute-0 nova_compute[244644]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.387 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.388 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Secure Boot support detected
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.390 244650 INFO nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.390 244650 INFO nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.398 244650 DEBUG nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.447 244650 INFO nova.virt.node [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Determined node identity 39e18386-dcd4-4a7a-8441-091a9ba1f70f from /var/lib/nova/compute_id
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.466 244650 WARNING nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute nodes ['39e18386-dcd4-4a7a-8441-091a9ba1f70f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 04 10:33:37 compute-0 systemd[1]: Started libpod-conmon-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope.
Dec 04 10:33:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.528 244650 INFO nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.549924781 +0000 UTC m=+0.302834033 container init 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.558590922 +0000 UTC m=+0.311500144 container start 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:33:37 compute-0 laughing_bhabha[245325]: 167 167
Dec 04 10:33:37 compute-0 systemd[1]: libpod-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope: Deactivated successfully.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.570 244650 WARNING nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:33:37 compute-0 nova_compute[244644]: 2025-12-04 10:33:37.572 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.662807145 +0000 UTC m=+0.415716467 container attach 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.664514066 +0000 UTC m=+0.417423328 container died 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-74540bad621965b17a4be5e1748ad040bbf67021841ed96ff2b79c855e756489-merged.mount: Deactivated successfully.
Dec 04 10:33:37 compute-0 podman[245308]: 2025-12-04 10:33:37.832431291 +0000 UTC m=+0.585340513 container remove 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:33:37 compute-0 systemd[1]: libpod-conmon-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope: Deactivated successfully.
Dec 04 10:33:38 compute-0 podman[245368]: 2025-12-04 10:33:38.004652891 +0000 UTC m=+0.046834094 container create 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:33:38 compute-0 systemd[1]: Started libpod-conmon-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope.
Dec 04 10:33:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:33:38 compute-0 podman[245368]: 2025-12-04 10:33:38.07853089 +0000 UTC m=+0.120712113 container init 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:33:38 compute-0 podman[245368]: 2025-12-04 10:33:37.984087564 +0000 UTC m=+0.026268777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:33:38 compute-0 podman[245368]: 2025-12-04 10:33:38.086448802 +0000 UTC m=+0.128630015 container start 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:33:38 compute-0 podman[245368]: 2025-12-04 10:33:38.089600488 +0000 UTC m=+0.131781701 container attach 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:33:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:33:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277150800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.112 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:33:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 04 10:33:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:38 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3277150800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.429 244650 WARNING nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.431 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.431 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.432 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.453 244650 WARNING nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] No compute node record for compute-0.ctlplane.example.com:39e18386-dcd4-4a7a-8441-091a9ba1f70f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 39e18386-dcd4-4a7a-8441-091a9ba1f70f could not be found.
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.475 244650 INFO nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 39e18386-dcd4-4a7a-8441-091a9ba1f70f
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.552 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:33:38 compute-0 nova_compute[244644]: 2025-12-04 10:33:38.553 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:33:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.623984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418624031, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1330, "num_deletes": 505, "total_data_size": 1639607, "memory_usage": 1669840, "flush_reason": "Manual Compaction"}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418642472, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1624331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13538, "largest_seqno": 14867, "table_properties": {"data_size": 1618432, "index_size": 2783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14756, "raw_average_key_size": 18, "raw_value_size": 1604682, "raw_average_value_size": 1959, "num_data_blocks": 127, "num_entries": 819, "num_filter_entries": 819, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844309, "oldest_key_time": 1764844309, "file_creation_time": 1764844418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18543 microseconds, and 4857 cpu microseconds.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.642527) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1624331 bytes OK
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.642559) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644502) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644524) EVENT_LOG_v1 {"time_micros": 1764844418644518, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1632576, prev total WAL file size 1632576, number of live WAL files 2.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.645288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1586KB)], [32(7444KB)]
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418645330, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9247553, "oldest_snapshot_seqno": -1}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3837 keys, 7307396 bytes, temperature: kUnknown
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418700120, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7307396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7280129, "index_size": 16597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 94119, "raw_average_key_size": 24, "raw_value_size": 7208980, "raw_average_value_size": 1878, "num_data_blocks": 703, "num_entries": 3837, "num_filter_entries": 3837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.700427) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7307396 bytes
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.702372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4860, records dropped: 1023 output_compression: NoCompression
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.702393) EVENT_LOG_v1 {"time_micros": 1764844418702383, "job": 14, "event": "compaction_finished", "compaction_time_micros": 54923, "compaction_time_cpu_micros": 18861, "output_level": 6, "num_output_files": 1, "total_output_size": 7307396, "num_input_records": 4860, "num_output_records": 3837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418702960, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418704536, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.645193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:33:38 compute-0 lvm[245488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:33:38 compute-0 lvm[245489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:33:38 compute-0 lvm[245489]: VG ceph_vg1 finished
Dec 04 10:33:38 compute-0 lvm[245488]: VG ceph_vg0 finished
Dec 04 10:33:38 compute-0 lvm[245491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:33:38 compute-0 lvm[245491]: VG ceph_vg2 finished
Dec 04 10:33:38 compute-0 admiring_nightingale[245385]: {}
Dec 04 10:33:39 compute-0 systemd[1]: libpod-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Deactivated successfully.
Dec 04 10:33:39 compute-0 systemd[1]: libpod-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Consumed 1.412s CPU time.
Dec 04 10:33:39 compute-0 podman[245368]: 2025-12-04 10:33:39.009324226 +0000 UTC m=+1.051505439 container died 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68-merged.mount: Deactivated successfully.
Dec 04 10:33:39 compute-0 podman[245368]: 2025-12-04 10:33:39.060497705 +0000 UTC m=+1.102678918 container remove 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:33:39 compute-0 systemd[1]: libpod-conmon-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Deactivated successfully.
Dec 04 10:33:39 compute-0 sudo[245248]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:33:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:33:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:39 compute-0 sudo[245506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:33:39 compute-0 sudo[245506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:33:39 compute-0 sudo[245506]: pam_unix(sudo:session): session closed for user root
Dec 04 10:33:39 compute-0 nova_compute[244644]: 2025-12-04 10:33:39.437 244650 INFO nova.scheduler.client.report [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] [req-a5c4e75f-5bd8-407c-9f38-fa4768b53063] Created resource provider record via placement API for resource provider with UUID 39e18386-dcd4-4a7a-8441-091a9ba1f70f and name compute-0.ctlplane.example.com.
Dec 04 10:33:39 compute-0 nova_compute[244644]: 2025-12-04 10:33:39.861 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:33:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:33:40 compute-0 ceph-mon[75358]: pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:33:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529618897' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.404 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.412 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 04 10:33:40 compute-0 nova_compute[244644]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.413 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] kernel doesn't support AMD SEV
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.414 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.414 244650 DEBUG nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.485 244650 DEBUG nova.scheduler.client.report [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updated inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.486 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.486 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.608 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.637 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.637 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.638 244650 DEBUG nova.service [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.742 244650 DEBUG nova.service [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 04 10:33:40 compute-0 nova_compute[244644]: 2025-12-04 10:33:40.742 244650 DEBUG nova.servicegroup.drivers.db [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 04 10:33:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:41 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2529618897' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:33:42 compute-0 ceph-mon[75358]: pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:44 compute-0 ceph-mon[75358]: pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:46 compute-0 ceph-mon[75358]: pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:48 compute-0 ceph-mon[75358]: pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:48 compute-0 podman[245553]: 2025-12-04 10:33:48.969907828 +0000 UTC m=+0.072485076 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:33:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:49 compute-0 ceph-mon[75358]: pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:50 compute-0 sshd-session[245573]: Invalid user ubuntu from 103.179.218.243 port 42438
Dec 04 10:33:50 compute-0 sshd-session[245573]: Received disconnect from 103.179.218.243 port 42438:11: Bye Bye [preauth]
Dec 04 10:33:50 compute-0 sshd-session[245573]: Disconnected from invalid user ubuntu 103.179.218.243 port 42438 [preauth]
Dec 04 10:33:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:52 compute-0 ceph-mon[75358]: pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:54 compute-0 ceph-mon[75358]: pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:33:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:33:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:33:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:56 compute-0 ceph-mon[75358]: pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:57 compute-0 sshd-session[245575]: Invalid user guest from 217.154.62.22 port 39108
Dec 04 10:33:57 compute-0 sshd-session[245575]: Received disconnect from 217.154.62.22 port 39108:11: Bye Bye [preauth]
Dec 04 10:33:57 compute-0 sshd-session[245575]: Disconnected from invalid user guest 217.154.62.22 port 39108 [preauth]
Dec 04 10:33:57 compute-0 podman[245577]: 2025-12-04 10:33:57.907159315 +0000 UTC m=+0.138211858 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:33:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:33:58 compute-0 ceph-mon[75358]: pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:33:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:33:58 compute-0 podman[245603]: 2025-12-04 10:33:58.939674473 +0000 UTC m=+0.043790781 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 04 10:33:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:00 compute-0 ceph-mon[75358]: pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:02 compute-0 ceph-mon[75358]: pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:34:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:34:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:34:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:34:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:34:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:34:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:04 compute-0 ceph-mon[75358]: pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:34:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:34:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:05 compute-0 ceph-mon[75358]: pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:34:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3330 writes, 14K keys, 3330 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3330 writes, 3330 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1282 writes, 5825 keys, 1282 commit groups, 1.0 writes per commit group, ingest: 8.61 MB, 0.01 MB/s
                                           Interval WAL: 1282 writes, 1282 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.0      0.16              0.04         7    0.023       0      0       0.0       0.0
                                             L6      1/0    6.97 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    127.1    104.6      0.40              0.11         6    0.066     24K   3207       0.0       0.0
                                            Sum      1/0    6.97 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     90.5    103.0      0.56              0.15        13    0.043     24K   3207       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    120.0    121.4      0.29              0.09         8    0.036     17K   2470       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    127.1    104.6      0.40              0.11         6    0.066     24K   3207       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.1      0.16              0.04         6    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 308.00 MB usage: 1.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000123 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(107,1.72 MB,0.557154%) FilterBlock(14,75.86 KB,0.0240524%) IndexBlock(14,149.05 KB,0.0472577%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:34:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:08 compute-0 ceph-mon[75358]: pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:10 compute-0 ceph-mon[75358]: pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:11 compute-0 ceph-mon[75358]: pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:14 compute-0 ceph-mon[75358]: pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:16 compute-0 ceph-mon[75358]: pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:17 compute-0 ceph-mon[75358]: pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:19 compute-0 ceph-mon[75358]: pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:19 compute-0 podman[245622]: 2025-12-04 10:34:19.972190979 +0000 UTC m=+0.072514097 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 04 10:34:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:22 compute-0 ceph-mon[75358]: pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:24 compute-0 ceph-mon[75358]: pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:25 compute-0 ceph-mon[75358]: pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:34:26
Dec 04 10:34:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:34:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:34:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Dec 04 10:34:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:34:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:34:28 compute-0 ceph-mon[75358]: pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:28 compute-0 podman[245642]: 2025-12-04 10:34:28.973469603 +0000 UTC m=+0.073839129 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 04 10:34:29 compute-0 podman[245669]: 2025-12-04 10:34:29.061007462 +0000 UTC m=+0.059177613 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 04 10:34:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:30 compute-0 ceph-mon[75358]: pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:32 compute-0 ceph-mon[75358]: pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:33 compute-0 nova_compute[244644]: 2025-12-04 10:34:33.744 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:33 compute-0 nova_compute[244644]: 2025-12-04 10:34:33.897 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:34 compute-0 ceph-mon[75358]: pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:35 compute-0 ceph-mon[75358]: pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:35 compute-0 sshd-session[245688]: Invalid user ionadmin from 74.249.218.27 port 55230
Dec 04 10:34:35 compute-0 sshd-session[245688]: Received disconnect from 74.249.218.27 port 55230:11: Bye Bye [preauth]
Dec 04 10:34:35 compute-0 sshd-session[245688]: Disconnected from invalid user ionadmin 74.249.218.27 port 55230 [preauth]
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.363 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.365 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.365 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.423 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.425 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:34:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:34:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945567612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:34:36 compute-0 nova_compute[244644]: 2025-12-04 10:34:36.979 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:34:37 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3945567612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.134 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:34:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.230 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.230 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.254 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:34:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:34:37 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927827360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.755 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.762 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.779 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.822 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:34:37 compute-0 nova_compute[244644]: 2025-12-04 10:34:37.823 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:34:38 compute-0 ceph-mon[75358]: pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:38 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2927827360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:34:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:39 compute-0 sudo[245734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:34:39 compute-0 sudo[245734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:39 compute-0 sudo[245734]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:39 compute-0 ceph-mon[75358]: pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:39 compute-0 sudo[245759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:34:39 compute-0 sudo[245759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:39 compute-0 sudo[245759]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:34:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:34:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:34:39 compute-0 sudo[245815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:34:39 compute-0 sudo[245815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:39 compute-0 sudo[245815]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:40 compute-0 sudo[245840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:34:40 compute-0 sudo[245840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.322584476 +0000 UTC m=+0.088365719 container create e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:34:40 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.257065636 +0000 UTC m=+0.022846899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:40 compute-0 systemd[1]: Started libpod-conmon-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope.
Dec 04 10:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.44282606 +0000 UTC m=+0.208607303 container init e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.450644141 +0000 UTC m=+0.216425384 container start e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.453961922 +0000 UTC m=+0.219743185 container attach e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:34:40 compute-0 friendly_boyd[245892]: 167 167
Dec 04 10:34:40 compute-0 systemd[1]: libpod-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope: Deactivated successfully.
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.456313739 +0000 UTC m=+0.222094982 container died e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4676ef9522015c91dc60851711e8f736a71ce5ce6c9e3d206caf238df124b4d-merged.mount: Deactivated successfully.
Dec 04 10:34:40 compute-0 podman[245876]: 2025-12-04 10:34:40.642516945 +0000 UTC m=+0.408298188 container remove e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:34:40 compute-0 systemd[1]: libpod-conmon-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope: Deactivated successfully.
Dec 04 10:34:40 compute-0 podman[245914]: 2025-12-04 10:34:40.819600918 +0000 UTC m=+0.062838775 container create 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:34:40 compute-0 systemd[1]: Started libpod-conmon-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope.
Dec 04 10:34:40 compute-0 podman[245914]: 2025-12-04 10:34:40.777728835 +0000 UTC m=+0.020966722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:40 compute-0 podman[245914]: 2025-12-04 10:34:40.893354467 +0000 UTC m=+0.136592354 container init 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:34:40 compute-0 podman[245914]: 2025-12-04 10:34:40.903335911 +0000 UTC m=+0.146573788 container start 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:34:40 compute-0 podman[245914]: 2025-12-04 10:34:40.919209319 +0000 UTC m=+0.162447206 container attach 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:34:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:41 compute-0 ceph-mon[75358]: pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:41 compute-0 clever_villani[245931]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:34:41 compute-0 clever_villani[245931]: --> All data devices are unavailable
Dec 04 10:34:41 compute-0 systemd[1]: libpod-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope: Deactivated successfully.
Dec 04 10:34:41 compute-0 podman[245914]: 2025-12-04 10:34:41.48622721 +0000 UTC m=+0.729465087 container died 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c-merged.mount: Deactivated successfully.
Dec 04 10:34:41 compute-0 podman[245914]: 2025-12-04 10:34:41.571387539 +0000 UTC m=+0.814625396 container remove 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:34:41 compute-0 systemd[1]: libpod-conmon-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope: Deactivated successfully.
Dec 04 10:34:41 compute-0 sudo[245840]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:41 compute-0 sudo[245965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:34:41 compute-0 sudo[245965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:41 compute-0 sudo[245965]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:41 compute-0 sudo[245990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:34:41 compute-0 sudo[245990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.012125946 +0000 UTC m=+0.037386963 container create 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:41.996397722 +0000 UTC m=+0.021658769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:42 compute-0 systemd[1]: Started libpod-conmon-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope.
Dec 04 10:34:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.155594518 +0000 UTC m=+0.180855555 container init 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.163211414 +0000 UTC m=+0.188472431 container start 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.166640188 +0000 UTC m=+0.191901315 container attach 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:34:42 compute-0 heuristic_keller[246044]: 167 167
Dec 04 10:34:42 compute-0 systemd[1]: libpod-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope: Deactivated successfully.
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.170062781 +0000 UTC m=+0.195323808 container died 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-81c1430130fb0d1731963a1059f1fe03fae10a323c0ede3d184c596742943de6-merged.mount: Deactivated successfully.
Dec 04 10:34:42 compute-0 podman[246028]: 2025-12-04 10:34:42.315481371 +0000 UTC m=+0.340742388 container remove 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:34:42 compute-0 systemd[1]: libpod-conmon-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope: Deactivated successfully.
Dec 04 10:34:42 compute-0 podman[246067]: 2025-12-04 10:34:42.542897902 +0000 UTC m=+0.111307798 container create 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:34:42 compute-0 podman[246067]: 2025-12-04 10:34:42.455576511 +0000 UTC m=+0.023986457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:42 compute-0 systemd[1]: Started libpod-conmon-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope.
Dec 04 10:34:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:42 compute-0 podman[246067]: 2025-12-04 10:34:42.645388844 +0000 UTC m=+0.213798790 container init 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:34:42 compute-0 podman[246067]: 2025-12-04 10:34:42.652658551 +0000 UTC m=+0.221068447 container start 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:34:42 compute-0 podman[246067]: 2025-12-04 10:34:42.679314972 +0000 UTC m=+0.247724888 container attach 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:34:42 compute-0 magical_meninsky[246084]: {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     "0": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "devices": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "/dev/loop3"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             ],
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_name": "ceph_lv0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_size": "21470642176",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "name": "ceph_lv0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "tags": {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_name": "ceph",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.crush_device_class": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.encrypted": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.objectstore": "bluestore",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_id": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.vdo": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.with_tpm": "0"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             },
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "vg_name": "ceph_vg0"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         }
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     ],
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     "1": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "devices": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "/dev/loop4"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             ],
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_name": "ceph_lv1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_size": "21470642176",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "name": "ceph_lv1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "tags": {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_name": "ceph",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.crush_device_class": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.encrypted": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.objectstore": "bluestore",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_id": "1",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.vdo": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.with_tpm": "0"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             },
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "vg_name": "ceph_vg1"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         }
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     ],
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     "2": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "devices": [
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "/dev/loop5"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             ],
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_name": "ceph_lv2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_size": "21470642176",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "name": "ceph_lv2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "tags": {
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.cluster_name": "ceph",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.crush_device_class": "",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.encrypted": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.objectstore": "bluestore",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osd_id": "2",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.vdo": "0",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:                 "ceph.with_tpm": "0"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             },
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "type": "block",
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:             "vg_name": "ceph_vg2"
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:         }
Dec 04 10:34:42 compute-0 magical_meninsky[246084]:     ]
Dec 04 10:34:42 compute-0 magical_meninsky[246084]: }
Dec 04 10:34:42 compute-0 systemd[1]: libpod-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope: Deactivated successfully.
Dec 04 10:34:42 compute-0 podman[246093]: 2025-12-04 10:34:42.983834074 +0000 UTC m=+0.023236507 container died 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4-merged.mount: Deactivated successfully.
Dec 04 10:34:43 compute-0 podman[246093]: 2025-12-04 10:34:43.171768213 +0000 UTC m=+0.211170636 container remove 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:34:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:43 compute-0 systemd[1]: libpod-conmon-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope: Deactivated successfully.
Dec 04 10:34:43 compute-0 sudo[245990]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:43 compute-0 sudo[246109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:34:43 compute-0 sudo[246109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:43 compute-0 sudo[246109]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:43 compute-0 sudo[246134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:34:43 compute-0 sudo[246134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.732309375 +0000 UTC m=+0.055936367 container create 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:34:43 compute-0 systemd[1]: Started libpod-conmon-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope.
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.706278619 +0000 UTC m=+0.029905651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.817881494 +0000 UTC m=+0.141508486 container init 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.824061894 +0000 UTC m=+0.147688876 container start 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.827811096 +0000 UTC m=+0.151438098 container attach 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:34:43 compute-0 systemd[1]: libpod-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope: Deactivated successfully.
Dec 04 10:34:43 compute-0 flamboyant_cohen[246187]: 167 167
Dec 04 10:34:43 compute-0 conmon[246187]: conmon 78ac4482360f3a6aa692 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope/container/memory.events
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.83041806 +0000 UTC m=+0.154045042 container died 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f4f3d71c672d555358757da4660ad9f389078d9ab4d5f7f0a9a8355b9e940b6-merged.mount: Deactivated successfully.
Dec 04 10:34:43 compute-0 podman[246171]: 2025-12-04 10:34:43.865874975 +0000 UTC m=+0.189501947 container remove 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:34:43 compute-0 systemd[1]: libpod-conmon-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope: Deactivated successfully.
Dec 04 10:34:44 compute-0 podman[246211]: 2025-12-04 10:34:44.024169939 +0000 UTC m=+0.042592721 container create 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:34:44 compute-0 systemd[1]: Started libpod-conmon-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope.
Dec 04 10:34:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:34:44 compute-0 podman[246211]: 2025-12-04 10:34:44.005978835 +0000 UTC m=+0.024401647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:34:44 compute-0 podman[246211]: 2025-12-04 10:34:44.101744633 +0000 UTC m=+0.120167435 container init 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:34:44 compute-0 podman[246211]: 2025-12-04 10:34:44.110352603 +0000 UTC m=+0.128775385 container start 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:34:44 compute-0 podman[246211]: 2025-12-04 10:34:44.114628837 +0000 UTC m=+0.133051649 container attach 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:34:44 compute-0 ceph-mon[75358]: pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 04 10:34:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092581525' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 04 10:34:44 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 04 10:34:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 04 10:34:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 04 10:34:44 compute-0 lvm[246306]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:34:44 compute-0 lvm[246307]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:34:44 compute-0 lvm[246307]: VG ceph_vg1 finished
Dec 04 10:34:44 compute-0 lvm[246306]: VG ceph_vg0 finished
Dec 04 10:34:44 compute-0 lvm[246309]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:34:44 compute-0 lvm[246309]: VG ceph_vg2 finished
Dec 04 10:34:44 compute-0 eager_sutherland[246228]: {}
Dec 04 10:34:45 compute-0 systemd[1]: libpod-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Deactivated successfully.
Dec 04 10:34:45 compute-0 systemd[1]: libpod-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Consumed 1.447s CPU time.
Dec 04 10:34:45 compute-0 podman[246211]: 2025-12-04 10:34:45.024936227 +0000 UTC m=+1.043359029 container died 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2-merged.mount: Deactivated successfully.
Dec 04 10:34:45 compute-0 podman[246211]: 2025-12-04 10:34:45.074303611 +0000 UTC m=+1.092726393 container remove 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:34:45 compute-0 systemd[1]: libpod-conmon-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Deactivated successfully.
Dec 04 10:34:45 compute-0 sudo[246134]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:34:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:34:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:45 compute-0 sudo[246326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:34:45 compute-0 sudo[246326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:34:45 compute-0 sudo[246326]: pam_unix(sudo:session): session closed for user root
Dec 04 10:34:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4092581525' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 04 10:34:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:45 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:34:46 compute-0 ceph-mon[75358]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 04 10:34:46 compute-0 ceph-mon[75358]: pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:48 compute-0 ceph-mon[75358]: pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:50 compute-0 ceph-mon[75358]: pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:50 compute-0 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec 04 10:34:50 compute-0 ceph-osd[86021]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000017s
Dec 04 10:34:50 compute-0 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec 04 10:34:50 compute-0 podman[246351]: 2025-12-04 10:34:50.995243287 +0000 UTC m=+0.093991255 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:34:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:52 compute-0 ceph-mon[75358]: pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:54 compute-0 ceph-mon[75358]: pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:54 compute-0 sshd-session[246371]: Invalid user deploy from 103.149.86.230 port 50570
Dec 04 10:34:54 compute-0 sshd-session[246371]: Received disconnect from 103.149.86.230 port 50570:11: Bye Bye [preauth]
Dec 04 10:34:54 compute-0 sshd-session[246371]: Disconnected from invalid user deploy 103.149.86.230 port 50570 [preauth]
Dec 04 10:34:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.900 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:34:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:34:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:34:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:56 compute-0 ceph-mon[75358]: pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:34:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:34:58 compute-0 ceph-mon[75358]: pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:34:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:59 compute-0 ceph-mon[75358]: pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:34:59 compute-0 podman[246374]: 2025-12-04 10:34:59.945950278 +0000 UTC m=+0.051998080 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 04 10:35:00 compute-0 podman[246373]: 2025-12-04 10:35:00.02550507 +0000 UTC m=+0.133335306 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 04 10:35:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:02 compute-0 ceph-mon[75358]: pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:03 compute-0 ceph-mon[75358]: pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:05 compute-0 sshd-session[246418]: Invalid user radio from 107.175.213.239 port 37564
Dec 04 10:35:05 compute-0 sshd-session[246418]: Received disconnect from 107.175.213.239 port 37564:11: Bye Bye [preauth]
Dec 04 10:35:05 compute-0 sshd-session[246418]: Disconnected from invalid user radio 107.175.213.239 port 37564 [preauth]
Dec 04 10:35:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:05 compute-0 ceph-mon[75358]: pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 04 10:35:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 04 10:35:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 04 10:35:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 04 10:35:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 04 10:35:08 compute-0 ceph-mon[75358]: pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 04 10:35:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 04 10:35:09 compute-0 ceph-mon[75358]: pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:35:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:35:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:35:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:35:11 compute-0 ceph-mon[75358]: pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:35:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:35:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:14 compute-0 ceph-mon[75358]: pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:15 compute-0 sshd-session[246416]: Invalid user monitoring from 101.47.163.20 port 57654
Dec 04 10:35:15 compute-0 sshd-session[246416]: Received disconnect from 101.47.163.20 port 57654:11: Bye Bye [preauth]
Dec 04 10:35:15 compute-0 sshd-session[246416]: Disconnected from invalid user monitoring 101.47.163.20 port 57654 [preauth]
Dec 04 10:35:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:15 compute-0 ceph-mon[75358]: pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:17 compute-0 ceph-mon[75358]: pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:19 compute-0 ceph-mon[75358]: pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:21 compute-0 podman[246420]: 2025-12-04 10:35:21.957621348 +0000 UTC m=+0.058056118 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 04 10:35:22 compute-0 ceph-mon[75358]: pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:35:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5692 writes, 24K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5692 writes, 915 syncs, 6.22 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:35:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:23 compute-0 ceph-mon[75358]: pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:25 compute-0 ceph-mon[75358]: pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:35:26
Dec 04 10:35:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:35:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:35:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms']
Dec 04 10:35:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:27 compute-0 sshd-session[246440]: Invalid user postgres from 103.179.218.243 port 42544
Dec 04 10:35:27 compute-0 ceph-mon[75358]: pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:27 compute-0 sshd-session[246440]: Received disconnect from 103.179.218.243 port 42544:11: Bye Bye [preauth]
Dec 04 10:35:27 compute-0 sshd-session[246440]: Disconnected from invalid user postgres 103.179.218.243 port 42544 [preauth]
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:35:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:35:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:35:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:35:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:29 compute-0 sshd-session[246442]: Invalid user frontend from 217.154.62.22 port 59520
Dec 04 10:35:30 compute-0 podman[246444]: 2025-12-04 10:35:30.035899993 +0000 UTC m=+0.050724520 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:35:30 compute-0 sshd-session[246442]: Received disconnect from 217.154.62.22 port 59520:11: Bye Bye [preauth]
Dec 04 10:35:30 compute-0 sshd-session[246442]: Disconnected from invalid user frontend 217.154.62.22 port 59520 [preauth]
Dec 04 10:35:30 compute-0 podman[246463]: 2025-12-04 10:35:30.145027606 +0000 UTC m=+0.082440223 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:35:30 compute-0 ceph-mon[75358]: pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:32 compute-0 ceph-mon[75358]: pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:34 compute-0 ceph-mon[75358]: pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:35 compute-0 ceph-mon[75358]: pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:35:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:35:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.816 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.857 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.859 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.894 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:35:37 compute-0 nova_compute[244644]: 2025-12-04 10:35:37.896 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:35:38 compute-0 ceph-mon[75358]: pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:35:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989021705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.519 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:35:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.708 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.710 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.710 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.711 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.798 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.799 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:35:38 compute-0 nova_compute[244644]: 2025-12-04 10:35:38.815 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:35:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:39 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/989021705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:35:39 compute-0 ceph-mon[75358]: pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:35:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364954019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.397 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.403 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.431 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.433 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.433 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.912 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:35:39 compute-0 nova_compute[244644]: 2025-12-04 10:35:39.913 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:35:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2364954019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:35:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:41 compute-0 ceph-mon[75358]: pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec 04 10:35:44 compute-0 sshd-session[246534]: Invalid user admin from 41.59.200.166 port 40775
Dec 04 10:35:44 compute-0 ceph-mon[75358]: pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:44 compute-0 sshd-session[246534]: Connection closed by invalid user admin 41.59.200.166 port 40775 [preauth]
Dec 04 10:35:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:45 compute-0 sudo[246536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:35:45 compute-0 sudo[246536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:45 compute-0 sudo[246536]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:45 compute-0 sudo[246561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:35:45 compute-0 sudo[246561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:45 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.526 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:35:45 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.527 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:35:45 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.527 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:35:45 compute-0 sshd-session[246489]: error: kex_exchange_identification: read: Connection timed out
Dec 04 10:35:45 compute-0 sshd-session[246489]: banner exchange: Connection from 218.13.214.18 port 42884: Connection timed out
Dec 04 10:35:45 compute-0 sudo[246561]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:35:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:35:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:35:46 compute-0 sudo[246617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:35:46 compute-0 sudo[246617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:46 compute-0 sudo[246617]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:46 compute-0 sudo[246642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:35:46 compute-0 sudo[246642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:46 compute-0 ceph-mon[75358]: pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:35:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:35:46 compute-0 podman[246679]: 2025-12-04 10:35:46.414596176 +0000 UTC m=+0.025144054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:46 compute-0 podman[246679]: 2025-12-04 10:35:46.918549407 +0000 UTC m=+0.529097265 container create 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:35:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:47 compute-0 systemd[1]: Started libpod-conmon-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope.
Dec 04 10:35:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:48 compute-0 podman[246679]: 2025-12-04 10:35:48.13255933 +0000 UTC m=+1.743107248 container init 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:35:48 compute-0 ceph-mon[75358]: pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:48 compute-0 podman[246679]: 2025-12-04 10:35:48.14361229 +0000 UTC m=+1.754160168 container start 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:35:48 compute-0 podman[246679]: 2025-12-04 10:35:48.149268648 +0000 UTC m=+1.759816526 container attach 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:35:48 compute-0 gifted_feistel[246695]: 167 167
Dec 04 10:35:48 compute-0 systemd[1]: libpod-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope: Deactivated successfully.
Dec 04 10:35:48 compute-0 podman[246679]: 2025-12-04 10:35:48.15632471 +0000 UTC m=+1.766872568 container died 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 04 10:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0173c65e0fb61891059b10a91e08ecae8bdeab822458df5dc0161b530f0aa57-merged.mount: Deactivated successfully.
Dec 04 10:35:48 compute-0 podman[246679]: 2025-12-04 10:35:48.207125721 +0000 UTC m=+1.817673579 container remove 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:35:48 compute-0 systemd[1]: libpod-conmon-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope: Deactivated successfully.
Dec 04 10:35:48 compute-0 podman[246717]: 2025-12-04 10:35:48.391496191 +0000 UTC m=+0.050537655 container create ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:35:48 compute-0 systemd[1]: Started libpod-conmon-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope.
Dec 04 10:35:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:48 compute-0 podman[246717]: 2025-12-04 10:35:48.370737955 +0000 UTC m=+0.029779449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:48 compute-0 podman[246717]: 2025-12-04 10:35:48.476470875 +0000 UTC m=+0.135512359 container init ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:35:48 compute-0 podman[246717]: 2025-12-04 10:35:48.488568431 +0000 UTC m=+0.147609895 container start ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:35:48 compute-0 podman[246717]: 2025-12-04 10:35:48.57948631 +0000 UTC m=+0.238527774 container attach ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:35:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:48 compute-0 charming_grothendieck[246734]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:35:48 compute-0 charming_grothendieck[246734]: --> All data devices are unavailable
Dec 04 10:35:49 compute-0 systemd[1]: libpod-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope: Deactivated successfully.
Dec 04 10:35:49 compute-0 podman[246754]: 2025-12-04 10:35:49.06863101 +0000 UTC m=+0.026060488 container died ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e-merged.mount: Deactivated successfully.
Dec 04 10:35:49 compute-0 podman[246754]: 2025-12-04 10:35:49.111044385 +0000 UTC m=+0.068473833 container remove ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:35:49 compute-0 systemd[1]: libpod-conmon-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope: Deactivated successfully.
Dec 04 10:35:49 compute-0 sudo[246642]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:49 compute-0 sudo[246770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:35:49 compute-0 sudo[246770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:49 compute-0 sudo[246770]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:49 compute-0 sudo[246795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:35:49 compute-0 sudo[246795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.601457696 +0000 UTC m=+0.046232339 container create de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:35:49 compute-0 systemd[1]: Started libpod-conmon-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope.
Dec 04 10:35:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.579794657 +0000 UTC m=+0.024569360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.72987429 +0000 UTC m=+0.174648963 container init de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.737968908 +0000 UTC m=+0.182743561 container start de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.742237582 +0000 UTC m=+0.187012235 container attach de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:35:49 compute-0 blissful_ganguly[246849]: 167 167
Dec 04 10:35:49 compute-0 systemd[1]: libpod-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope: Deactivated successfully.
Dec 04 10:35:49 compute-0 conmon[246849]: conmon de6fcc1e329d024a6464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope/container/memory.events
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.745653116 +0000 UTC m=+0.190427789 container died de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75abd2673fc8e874efa538d18c996052b004f8a5bcc861c5c2424b09417bf8e-merged.mount: Deactivated successfully.
Dec 04 10:35:49 compute-0 podman[246832]: 2025-12-04 10:35:49.786215055 +0000 UTC m=+0.230989708 container remove de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:35:49 compute-0 systemd[1]: libpod-conmon-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope: Deactivated successfully.
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.036275199 +0000 UTC m=+0.117763345 container create 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:49.942975821 +0000 UTC m=+0.024463987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:50 compute-0 systemd[1]: Started libpod-conmon-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope.
Dec 04 10:35:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.133866392 +0000 UTC m=+0.215354558 container init 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.141851466 +0000 UTC m=+0.223339612 container start 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.161437974 +0000 UTC m=+0.242926290 container attach 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:35:50 compute-0 ceph-mon[75358]: pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]: {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     "0": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "devices": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "/dev/loop3"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             ],
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_name": "ceph_lv0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_size": "21470642176",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "name": "ceph_lv0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "tags": {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_name": "ceph",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.crush_device_class": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.encrypted": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.objectstore": "bluestore",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_id": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.vdo": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.with_tpm": "0"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             },
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "vg_name": "ceph_vg0"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         }
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     ],
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     "1": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "devices": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "/dev/loop4"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             ],
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_name": "ceph_lv1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_size": "21470642176",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "name": "ceph_lv1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "tags": {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_name": "ceph",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.crush_device_class": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.encrypted": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.objectstore": "bluestore",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_id": "1",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.vdo": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.with_tpm": "0"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             },
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "vg_name": "ceph_vg1"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         }
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     ],
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     "2": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "devices": [
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "/dev/loop5"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             ],
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_name": "ceph_lv2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_size": "21470642176",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "name": "ceph_lv2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "tags": {
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.cluster_name": "ceph",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.crush_device_class": "",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.encrypted": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.objectstore": "bluestore",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osd_id": "2",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.vdo": "0",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:                 "ceph.with_tpm": "0"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             },
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "type": "block",
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:             "vg_name": "ceph_vg2"
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:         }
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]:     ]
Dec 04 10:35:50 compute-0 hopeful_bardeen[246889]: }
Dec 04 10:35:50 compute-0 systemd[1]: libpod-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope: Deactivated successfully.
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.445894038 +0000 UTC m=+0.527382204 container died 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941-merged.mount: Deactivated successfully.
Dec 04 10:35:50 compute-0 podman[246872]: 2025-12-04 10:35:50.482573233 +0000 UTC m=+0.564061379 container remove 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 04 10:35:50 compute-0 systemd[1]: libpod-conmon-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope: Deactivated successfully.
Dec 04 10:35:50 compute-0 sudo[246795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:50 compute-0 sudo[246909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:35:50 compute-0 sudo[246909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:50 compute-0 sudo[246909]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:50 compute-0 sudo[246934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:35:50 compute-0 sudo[246934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:50 compute-0 podman[246972]: 2025-12-04 10:35:50.964577899 +0000 UTC m=+0.046302892 container create 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:35:50 compute-0 systemd[1]: Started libpod-conmon-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope.
Dec 04 10:35:51 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:50.948596929 +0000 UTC m=+0.030321942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:51.050346222 +0000 UTC m=+0.132071265 container init 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:51.05722477 +0000 UTC m=+0.138949763 container start 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:51.06010799 +0000 UTC m=+0.141832983 container attach 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:35:51 compute-0 distracted_colden[246988]: 167 167
Dec 04 10:35:51 compute-0 systemd[1]: libpod-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope: Deactivated successfully.
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:51.063940974 +0000 UTC m=+0.145665967 container died 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e6d95de51da5d38b9d2e37988b9e2c1bfaa2469b86b8007f5b0d824edcb3d77-merged.mount: Deactivated successfully.
Dec 04 10:35:51 compute-0 podman[246972]: 2025-12-04 10:35:51.108246606 +0000 UTC m=+0.189971619 container remove 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:35:51 compute-0 systemd[1]: libpod-conmon-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope: Deactivated successfully.
Dec 04 10:35:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:51 compute-0 podman[247011]: 2025-12-04 10:35:51.268381814 +0000 UTC m=+0.043335729 container create 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:35:51 compute-0 systemd[1]: Started libpod-conmon-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope.
Dec 04 10:35:51 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:35:51 compute-0 podman[247011]: 2025-12-04 10:35:51.249449473 +0000 UTC m=+0.024403328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:35:51 compute-0 podman[247011]: 2025-12-04 10:35:51.35177437 +0000 UTC m=+0.126728225 container init 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:35:51 compute-0 podman[247011]: 2025-12-04 10:35:51.359081948 +0000 UTC m=+0.134035773 container start 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:35:51 compute-0 podman[247011]: 2025-12-04 10:35:51.362486391 +0000 UTC m=+0.137440226 container attach 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:35:52 compute-0 lvm[247113]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:35:52 compute-0 lvm[247108]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:35:52 compute-0 lvm[247113]: VG ceph_vg1 finished
Dec 04 10:35:52 compute-0 lvm[247108]: VG ceph_vg0 finished
Dec 04 10:35:52 compute-0 lvm[247115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:35:52 compute-0 lvm[247115]: VG ceph_vg2 finished
Dec 04 10:35:52 compute-0 lvm[247129]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:35:52 compute-0 lvm[247129]: VG ceph_vg2 finished
Dec 04 10:35:52 compute-0 podman[247103]: 2025-12-04 10:35:52.215986285 +0000 UTC m=+0.084039962 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:35:52 compute-0 nostalgic_joliot[247028]: {}
Dec 04 10:35:52 compute-0 systemd[1]: libpod-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Deactivated successfully.
Dec 04 10:35:52 compute-0 systemd[1]: libpod-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Consumed 1.457s CPU time.
Dec 04 10:35:52 compute-0 podman[247011]: 2025-12-04 10:35:52.267538834 +0000 UTC m=+1.042492669 container died 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:35:52 compute-0 ceph-mon[75358]: pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686-merged.mount: Deactivated successfully.
Dec 04 10:35:52 compute-0 podman[247011]: 2025-12-04 10:35:52.312751117 +0000 UTC m=+1.087704952 container remove 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:35:52 compute-0 systemd[1]: libpod-conmon-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Deactivated successfully.
Dec 04 10:35:52 compute-0 sudo[246934]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:35:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:35:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:52 compute-0 sudo[247146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:35:52 compute-0 sudo[247146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:35:52 compute-0 sudo[247146]: pam_unix(sudo:session): session closed for user root
Dec 04 10:35:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:35:53 compute-0 ceph-mon[75358]: pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:54 compute-0 sshd-session[247171]: Invalid user radio from 74.249.218.27 port 44760
Dec 04 10:35:54 compute-0 sshd-session[247171]: Received disconnect from 74.249.218.27 port 44760:11: Bye Bye [preauth]
Dec 04 10:35:54 compute-0 sshd-session[247171]: Disconnected from invalid user radio 74.249.218.27 port 44760 [preauth]
Dec 04 10:35:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.902 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:35:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:35:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:35:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:56 compute-0 ceph-mon[75358]: pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:35:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:35:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:35:59 compute-0 ceph-mon[75358]: pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:35:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:00 compute-0 ceph-mon[75358]: pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:00 compute-0 podman[247174]: 2025-12-04 10:36:00.983905985 +0000 UTC m=+0.083029297 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:36:01 compute-0 podman[247173]: 2025-12-04 10:36:01.003929223 +0000 UTC m=+0.103230540 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:36:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:01 compute-0 anacron[30888]: Job `cron.weekly' started
Dec 04 10:36:01 compute-0 anacron[30888]: Job `cron.weekly' terminated
Dec 04 10:36:02 compute-0 ceph-mon[75358]: pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:03 compute-0 ceph-mon[75358]: pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:06 compute-0 ceph-mon[75358]: pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:07 compute-0 ceph-mon[75358]: pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:10 compute-0 ceph-mon[75358]: pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:36:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:36:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:36:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:36:11 compute-0 ceph-mon[75358]: pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:36:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:36:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:13 compute-0 sshd-session[247220]: Invalid user ventas01 from 103.149.86.230 port 38134
Dec 04 10:36:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:14 compute-0 sshd-session[247220]: Received disconnect from 103.149.86.230 port 38134:11: Bye Bye [preauth]
Dec 04 10:36:14 compute-0 sshd-session[247220]: Disconnected from invalid user ventas01 103.149.86.230 port 38134 [preauth]
Dec 04 10:36:14 compute-0 ceph-mon[75358]: pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:16 compute-0 ceph-mon[75358]: pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:18 compute-0 ceph-mon[75358]: pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.306157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579306206, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 251, "total_data_size": 2463957, "memory_usage": 2498784, "flush_reason": "Manual Compaction"}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579329191, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2419087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14868, "largest_seqno": 16383, "table_properties": {"data_size": 2412016, "index_size": 4142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14465, "raw_average_key_size": 19, "raw_value_size": 2397875, "raw_average_value_size": 3271, "num_data_blocks": 189, "num_entries": 733, "num_filter_entries": 733, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844419, "oldest_key_time": 1764844419, "file_creation_time": 1764844579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 23103 microseconds, and 9140 cpu microseconds.
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.329258) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2419087 bytes OK
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.329316) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332608) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332629) EVENT_LOG_v1 {"time_micros": 1764844579332623, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332659) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2457318, prev total WAL file size 2457318, number of live WAL files 2.
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.333720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2362KB)], [35(7136KB)]
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579333811, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9726483, "oldest_snapshot_seqno": -1}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4056 keys, 7920654 bytes, temperature: kUnknown
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579393987, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7920654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7891377, "index_size": 18031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99157, "raw_average_key_size": 24, "raw_value_size": 7815815, "raw_average_value_size": 1926, "num_data_blocks": 763, "num_entries": 4056, "num_filter_entries": 4056, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.394310) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7920654 bytes
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.396210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4570, records dropped: 514 output_compression: NoCompression
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.396234) EVENT_LOG_v1 {"time_micros": 1764844579396222, "job": 16, "event": "compaction_finished", "compaction_time_micros": 60258, "compaction_time_cpu_micros": 18332, "output_level": 6, "num_output_files": 1, "total_output_size": 7920654, "num_input_records": 4570, "num_output_records": 4056, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579396721, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579398356, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.333584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:19 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:36:20 compute-0 ceph-mon[75358]: pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:22 compute-0 ceph-mon[75358]: pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:22 compute-0 podman[247222]: 2025-12-04 10:36:22.974429098 +0000 UTC m=+0.073250639 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 04 10:36:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:23 compute-0 ceph-mon[75358]: pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:26 compute-0 ceph-mon[75358]: pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:36:26
Dec 04 10:36:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:36:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:36:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Dec 04 10:36:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:36:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:36:28 compute-0 ceph-mon[75358]: pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:30 compute-0 ceph-mon[75358]: pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:31 compute-0 podman[247244]: 2025-12-04 10:36:31.970150078 +0000 UTC m=+0.064880466 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:36:31 compute-0 podman[247243]: 2025-12-04 10:36:31.979749662 +0000 UTC m=+0.083299755 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:36:32 compute-0 ceph-mon[75358]: pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:34 compute-0 ceph-mon[75358]: pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:36 compute-0 ceph-mon[75358]: pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:36 compute-0 nova_compute[244644]: 2025-12-04 10:36:36.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:36 compute-0 nova_compute[244644]: 2025-12-04 10:36:36.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:36:36 compute-0 nova_compute[244644]: 2025-12-04 10:36:36.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:36:36 compute-0 nova_compute[244644]: 2025-12-04 10:36:36.848 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:36:36 compute-0 nova_compute[244644]: 2025-12-04 10:36:36.848 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:36:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:37 compute-0 ceph-mon[75358]: pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:37 compute-0 nova_compute[244644]: 2025-12-04 10:36:37.833 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:36:37 compute-0 nova_compute[244644]: 2025-12-04 10:36:37.833 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:36:37 compute-0 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:36:37 compute-0 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:36:37 compute-0 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:36:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:36:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744744767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:36:38 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3744744767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:36:38 compute-0 nova_compute[244644]: 2025-12-04 10:36:38.403 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:36:38 compute-0 nova_compute[244644]: 2025-12-04 10:36:38.565 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:36:38 compute-0 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:36:38 compute-0 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:36:38 compute-0 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:36:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:39 compute-0 ceph-mon[75358]: pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:39 compute-0 nova_compute[244644]: 2025-12-04 10:36:39.913 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:36:39 compute-0 nova_compute[244644]: 2025-12-04 10:36:39.914 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:36:39 compute-0 nova_compute[244644]: 2025-12-04 10:36:39.934 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:36:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:36:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788744034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:36:40 compute-0 nova_compute[244644]: 2025-12-04 10:36:40.548 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:36:40 compute-0 nova_compute[244644]: 2025-12-04 10:36:40.557 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:36:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1788744034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:36:40 compute-0 nova_compute[244644]: 2025-12-04 10:36:40.777 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:36:40 compute-0 nova_compute[244644]: 2025-12-04 10:36:40.778 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:36:40 compute-0 nova_compute[244644]: 2025-12-04 10:36:40.778 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:36:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.268 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:36:41 compute-0 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:36:41 compute-0 ceph-mon[75358]: pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:44 compute-0 ceph-mon[75358]: pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:46 compute-0 ceph-mon[75358]: pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:48 compute-0 ceph-mon[75358]: pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:49 compute-0 ceph-mon[75358]: pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:52 compute-0 ceph-mon[75358]: pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:52 compute-0 sudo[247333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:36:52 compute-0 sudo[247333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:52 compute-0 sudo[247333]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:52 compute-0 sudo[247358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 04 10:36:52 compute-0 sudo[247358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:52 compute-0 sudo[247358]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:36:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:36:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:52 compute-0 sudo[247403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:36:52 compute-0 sudo[247403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:52 compute-0 sudo[247403]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:53 compute-0 sudo[247428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:36:53 compute-0 sudo[247428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:53 compute-0 podman[247452]: 2025-12-04 10:36:53.096863617 +0000 UTC m=+0.070108404 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 04 10:36:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:53 compute-0 sudo[247428]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:36:53 compute-0 sudo[247505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:36:53 compute-0 sudo[247505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:53 compute-0 sudo[247505]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:53 compute-0 sudo[247530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:36:53 compute-0 sudo[247530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:53 compute-0 ceph-mon[75358]: pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:36:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:36:53 compute-0 podman[247567]: 2025-12-04 10:36:53.920211786 +0000 UTC m=+0.040405366 container create 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:36:53 compute-0 systemd[1]: Started libpod-conmon-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope.
Dec 04 10:36:53 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:53 compute-0 podman[247567]: 2025-12-04 10:36:53.992976886 +0000 UTC m=+0.113170466 container init 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:36:53 compute-0 podman[247567]: 2025-12-04 10:36:53.902408513 +0000 UTC m=+0.022602113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:53 compute-0 podman[247567]: 2025-12-04 10:36:53.999216461 +0000 UTC m=+0.119410041 container start 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:36:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:54 compute-0 podman[247567]: 2025-12-04 10:36:54.002744319 +0000 UTC m=+0.122937919 container attach 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:36:54 compute-0 wizardly_sinoussi[247583]: 167 167
Dec 04 10:36:54 compute-0 systemd[1]: libpod-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope: Deactivated successfully.
Dec 04 10:36:54 compute-0 podman[247567]: 2025-12-04 10:36:54.004151384 +0000 UTC m=+0.124344964 container died 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f873706135dd6ba5060c75cd54ba8a53f1cd5ebf694a0904a96804cf09a5837e-merged.mount: Deactivated successfully.
Dec 04 10:36:54 compute-0 podman[247567]: 2025-12-04 10:36:54.042564099 +0000 UTC m=+0.162757679 container remove 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:36:54 compute-0 systemd[1]: libpod-conmon-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope: Deactivated successfully.
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.196073488 +0000 UTC m=+0.041422952 container create 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:36:54 compute-0 systemd[1]: Started libpod-conmon-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope.
Dec 04 10:36:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.178290675 +0000 UTC m=+0.023640169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.293317736 +0000 UTC m=+0.138667220 container init 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.301017728 +0000 UTC m=+0.146367192 container start 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.307582681 +0000 UTC m=+0.152932175 container attach 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:36:54 compute-0 modest_ellis[247624]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:36:54 compute-0 modest_ellis[247624]: --> All data devices are unavailable
Dec 04 10:36:54 compute-0 systemd[1]: libpod-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope: Deactivated successfully.
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.842724902 +0000 UTC m=+0.688074366 container died 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1-merged.mount: Deactivated successfully.
Dec 04 10:36:54 compute-0 podman[247607]: 2025-12-04 10:36:54.888543561 +0000 UTC m=+0.733893025 container remove 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:36:54 compute-0 systemd[1]: libpod-conmon-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope: Deactivated successfully.
Dec 04 10:36:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.902 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:36:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:36:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:36:54 compute-0 sudo[247530]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:54 compute-0 sudo[247656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:36:54 compute-0 sudo[247656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:54 compute-0 sudo[247656]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:55 compute-0 sudo[247681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:36:55 compute-0 sudo[247681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.332619987 +0000 UTC m=+0.046455947 container create 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:36:55 compute-0 systemd[1]: Started libpod-conmon-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope.
Dec 04 10:36:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.310799394 +0000 UTC m=+0.024635404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.444400156 +0000 UTC m=+0.158236136 container init 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.450324174 +0000 UTC m=+0.164160174 container start 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:36:55 compute-0 cranky_neumann[247734]: 167 167
Dec 04 10:36:55 compute-0 systemd[1]: libpod-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope: Deactivated successfully.
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.454664122 +0000 UTC m=+0.168500082 container attach 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.455266647 +0000 UTC m=+0.169102607 container died 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c580f1e00629821d687b7442c110f02bbb6b0890aa522b6af8365141c09e4a3f-merged.mount: Deactivated successfully.
Dec 04 10:36:55 compute-0 podman[247716]: 2025-12-04 10:36:55.50002666 +0000 UTC m=+0.213862630 container remove 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:36:55 compute-0 systemd[1]: libpod-conmon-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope: Deactivated successfully.
Dec 04 10:36:55 compute-0 podman[247760]: 2025-12-04 10:36:55.665447174 +0000 UTC m=+0.048963188 container create a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:36:55 compute-0 systemd[1]: Started libpod-conmon-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope.
Dec 04 10:36:55 compute-0 podman[247760]: 2025-12-04 10:36:55.642504784 +0000 UTC m=+0.026020848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:55 compute-0 podman[247760]: 2025-12-04 10:36:55.775585074 +0000 UTC m=+0.159101178 container init a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:36:55 compute-0 podman[247760]: 2025-12-04 10:36:55.782623029 +0000 UTC m=+0.166139053 container start a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:36:55 compute-0 podman[247760]: 2025-12-04 10:36:55.787014548 +0000 UTC m=+0.170530602 container attach a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:36:56 compute-0 stoic_gates[247776]: {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     "0": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "devices": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "/dev/loop3"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             ],
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_name": "ceph_lv0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_size": "21470642176",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "name": "ceph_lv0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "tags": {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_name": "ceph",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.crush_device_class": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.encrypted": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.objectstore": "bluestore",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_id": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.vdo": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.with_tpm": "0"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             },
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "vg_name": "ceph_vg0"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         }
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     ],
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     "1": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "devices": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "/dev/loop4"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             ],
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_name": "ceph_lv1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_size": "21470642176",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "name": "ceph_lv1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "tags": {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_name": "ceph",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.crush_device_class": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.encrypted": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.objectstore": "bluestore",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_id": "1",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.vdo": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.with_tpm": "0"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             },
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "vg_name": "ceph_vg1"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         }
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     ],
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     "2": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "devices": [
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "/dev/loop5"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             ],
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_name": "ceph_lv2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_size": "21470642176",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "name": "ceph_lv2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "tags": {
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.cluster_name": "ceph",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.crush_device_class": "",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.encrypted": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.objectstore": "bluestore",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osd_id": "2",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.vdo": "0",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:                 "ceph.with_tpm": "0"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             },
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "type": "block",
Dec 04 10:36:56 compute-0 stoic_gates[247776]:             "vg_name": "ceph_vg2"
Dec 04 10:36:56 compute-0 stoic_gates[247776]:         }
Dec 04 10:36:56 compute-0 stoic_gates[247776]:     ]
Dec 04 10:36:56 compute-0 stoic_gates[247776]: }
Dec 04 10:36:56 compute-0 systemd[1]: libpod-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope: Deactivated successfully.
Dec 04 10:36:56 compute-0 podman[247760]: 2025-12-04 10:36:56.09664807 +0000 UTC m=+0.480164124 container died a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8-merged.mount: Deactivated successfully.
Dec 04 10:36:56 compute-0 podman[247760]: 2025-12-04 10:36:56.149704599 +0000 UTC m=+0.533220623 container remove a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:36:56 compute-0 systemd[1]: libpod-conmon-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope: Deactivated successfully.
Dec 04 10:36:56 compute-0 sudo[247681]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:56 compute-0 sudo[247796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:36:56 compute-0 sudo[247796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:56 compute-0 sudo[247796]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:56 compute-0 ceph-mon[75358]: pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:36:56 compute-0 sudo[247821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:36:56 compute-0 sudo[247821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.602932923 +0000 UTC m=+0.037790631 container create 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:36:56 compute-0 systemd[1]: Started libpod-conmon-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope.
Dec 04 10:36:56 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.586139395 +0000 UTC m=+0.020997133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.683776794 +0000 UTC m=+0.118634522 container init 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.689819974 +0000 UTC m=+0.124677682 container start 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.693467115 +0000 UTC m=+0.128324843 container attach 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:36:56 compute-0 systemd[1]: libpod-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope: Deactivated successfully.
Dec 04 10:36:56 compute-0 relaxed_poincare[247874]: 167 167
Dec 04 10:36:56 compute-0 conmon[247874]: conmon 21b0251b30beb788bdf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope/container/memory.events
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.69529406 +0000 UTC m=+0.130151768 container died 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-81e3c92a34f5f749f7f8f35f9396938ced23bb76339901e02eeea13e943bab19-merged.mount: Deactivated successfully.
Dec 04 10:36:56 compute-0 podman[247858]: 2025-12-04 10:36:56.7367337 +0000 UTC m=+0.171591438 container remove 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:36:56 compute-0 systemd[1]: libpod-conmon-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope: Deactivated successfully.
Dec 04 10:36:56 compute-0 podman[247896]: 2025-12-04 10:36:56.938240852 +0000 UTC m=+0.065180061 container create c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:36:56 compute-0 systemd[1]: Started libpod-conmon-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope.
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:56.911693712 +0000 UTC m=+0.038633001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:36:57 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:57.043987243 +0000 UTC m=+0.170926522 container init c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:57.055088819 +0000 UTC m=+0.182028078 container start c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:57.059373156 +0000 UTC m=+0.186312385 container attach c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 04 10:36:57 compute-0 lvm[247991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:36:57 compute-0 lvm[247991]: VG ceph_vg0 finished
Dec 04 10:36:57 compute-0 lvm[247993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:36:57 compute-0 lvm[247993]: VG ceph_vg1 finished
Dec 04 10:36:57 compute-0 lvm[247995]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:36:57 compute-0 lvm[247995]: VG ceph_vg2 finished
Dec 04 10:36:57 compute-0 lvm[247996]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:36:57 compute-0 lvm[247996]: VG ceph_vg1 finished
Dec 04 10:36:57 compute-0 tender_nightingale[247914]: {}
Dec 04 10:36:57 compute-0 systemd[1]: libpod-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Deactivated successfully.
Dec 04 10:36:57 compute-0 systemd[1]: libpod-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Consumed 1.315s CPU time.
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:57.856630655 +0000 UTC m=+0.983569864 container died c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e-merged.mount: Deactivated successfully.
Dec 04 10:36:57 compute-0 podman[247896]: 2025-12-04 10:36:57.901224015 +0000 UTC m=+1.028163224 container remove c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:36:57 compute-0 systemd[1]: libpod-conmon-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Deactivated successfully.
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:36:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:36:57 compute-0 sudo[247821]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:36:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:36:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:58 compute-0 ceph-mon[75358]: pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 04 10:36:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:58 compute-0 sudo[248014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:36:58 compute-0 sudo[248014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:36:58 compute-0 sudo[248014]: pam_unix(sudo:session): session closed for user root
Dec 04 10:36:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:36:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:36:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:36:59 compute-0 ceph-mon[75358]: pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:36:59 compute-0 sshd-session[248039]: Invalid user master from 107.175.213.239 port 41488
Dec 04 10:36:59 compute-0 sshd-session[248039]: Received disconnect from 107.175.213.239 port 41488:11: Bye Bye [preauth]
Dec 04 10:36:59 compute-0 sshd-session[248039]: Disconnected from invalid user master 107.175.213.239 port 41488 [preauth]
Dec 04 10:36:59 compute-0 sshd-session[248012]: Invalid user admin from 103.179.218.243 port 42660
Dec 04 10:36:59 compute-0 sshd-session[248012]: Received disconnect from 103.179.218.243 port 42660:11: Bye Bye [preauth]
Dec 04 10:36:59 compute-0 sshd-session[248012]: Disconnected from invalid user admin 103.179.218.243 port 42660 [preauth]
Dec 04 10:37:00 compute-0 sshd-session[248041]: Invalid user radio from 217.154.62.22 port 36670
Dec 04 10:37:01 compute-0 sshd-session[248041]: Received disconnect from 217.154.62.22 port 36670:11: Bye Bye [preauth]
Dec 04 10:37:01 compute-0 sshd-session[248041]: Disconnected from invalid user radio 217.154.62.22 port 36670 [preauth]
Dec 04 10:37:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:02 compute-0 ceph-mon[75358]: pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:02 compute-0 podman[248044]: 2025-12-04 10:37:02.963149138 +0000 UTC m=+0.065061748 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 04 10:37:02 compute-0 podman[248043]: 2025-12-04 10:37:02.990745225 +0000 UTC m=+0.091834794 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec 04 10:37:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:04 compute-0 ceph-mon[75358]: pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:06 compute-0 ceph-mon[75358]: pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:08 compute-0 ceph-mon[75358]: pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:37:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 04 10:37:10 compute-0 ceph-mon[75358]: pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 04 10:37:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:11 compute-0 ceph-mon[75358]: pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:37:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:37:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:37:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:37:12 compute-0 sshd-session[248085]: Invalid user deploy from 74.249.218.27 port 36174
Dec 04 10:37:12 compute-0 sshd-session[248085]: Received disconnect from 74.249.218.27 port 36174:11: Bye Bye [preauth]
Dec 04 10:37:12 compute-0 sshd-session[248085]: Disconnected from invalid user deploy 74.249.218.27 port 36174 [preauth]
Dec 04 10:37:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:37:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:37:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:13 compute-0 ceph-mon[75358]: pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:16 compute-0 ceph-mon[75358]: pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:18 compute-0 ceph-mon[75358]: pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:20 compute-0 ceph-mon[75358]: pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:22 compute-0 ceph-mon[75358]: pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:23 compute-0 podman[248089]: 2025-12-04 10:37:23.93922811 +0000 UTC m=+0.049026011 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 04 10:37:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:24 compute-0 ceph-mon[75358]: pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:26 compute-0 sshd-session[248111]: Invalid user kingbase from 103.149.86.230 port 57940
Dec 04 10:37:26 compute-0 ceph-mon[75358]: pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:26 compute-0 sshd-session[248111]: Received disconnect from 103.149.86.230 port 57940:11: Bye Bye [preauth]
Dec 04 10:37:26 compute-0 sshd-session[248111]: Disconnected from invalid user kingbase 103.149.86.230 port 57940 [preauth]
Dec 04 10:37:26 compute-0 sshd-session[248087]: Invalid user alex from 101.47.163.20 port 47280
Dec 04 10:37:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:37:26
Dec 04 10:37:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:37:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:37:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control']
Dec 04 10:37:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:37:26 compute-0 sshd-session[248087]: Received disconnect from 101.47.163.20 port 47280:11: Bye Bye [preauth]
Dec 04 10:37:26 compute-0 sshd-session[248087]: Disconnected from invalid user alex 101.47.163.20 port 47280 [preauth]
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:27 compute-0 ceph-mon[75358]: pgmap v798: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:37:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:37:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:30 compute-0 ceph-mon[75358]: pgmap v799: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:32 compute-0 ceph-mon[75358]: pgmap v800: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:33 compute-0 podman[248114]: 2025-12-04 10:37:33.942119399 +0000 UTC m=+0.053314307 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec 04 10:37:33 compute-0 podman[248113]: 2025-12-04 10:37:33.975458458 +0000 UTC m=+0.086653226 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:37:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:34 compute-0 ceph-mon[75358]: pgmap v801: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:36 compute-0 ceph-mon[75358]: pgmap v802: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:37:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:37 compute-0 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:37 compute-0 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:37:37 compute-0 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:37:37 compute-0 ceph-mon[75358]: pgmap v803: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:37 compute-0 nova_compute[244644]: 2025-12-04 10:37:37.353 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:37:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:37:38 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614907084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:37:38 compute-0 nova_compute[244644]: 2025-12-04 10:37:38.946 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:37:38 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3614907084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:37:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.149 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.150 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.150 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.151 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.208 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.209 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.225 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:37:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:37:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650420587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.746 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.751 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.768 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.771 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:37:39 compute-0 nova_compute[244644]: 2025-12-04 10:37:39.771 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:37:39 compute-0 ceph-mon[75358]: pgmap v804: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:39 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1650420587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:37:40 compute-0 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:37:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:41 compute-0 ceph-mon[75358]: pgmap v805: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:44 compute-0 ceph-mon[75358]: pgmap v806: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:46 compute-0 ceph-mon[75358]: pgmap v807: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:48 compute-0 ceph-mon[75358]: pgmap v808: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:50 compute-0 ceph-mon[75358]: pgmap v809: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:51 compute-0 ceph-mon[75358]: pgmap v810: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:54 compute-0 ceph-mon[75358]: pgmap v811: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:37:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:37:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:37:54 compute-0 podman[248202]: 2025-12-04 10:37:54.944007322 +0000 UTC m=+0.056739583 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 04 10:37:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:56 compute-0 ceph-mon[75358]: pgmap v812: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:37:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:37:58 compute-0 ceph-mon[75358]: pgmap v813: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:58 compute-0 sudo[248222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:37:58 compute-0 sudo[248222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:37:58 compute-0 sudo[248222]: pam_unix(sudo:session): session closed for user root
Dec 04 10:37:58 compute-0 sudo[248247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:37:58 compute-0 sudo[248247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:37:59 compute-0 sudo[248247]: pam_unix(sudo:session): session closed for user root
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:37:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:37:59 compute-0 sudo[248303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:37:59 compute-0 sudo[248303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:37:59 compute-0 sudo[248303]: pam_unix(sudo:session): session closed for user root
Dec 04 10:37:59 compute-0 sudo[248328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:37:59 compute-0 sudo[248328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:37:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:37:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.49392682 +0000 UTC m=+0.040366454 container create 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:37:59 compute-0 systemd[1]: Started libpod-conmon-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope.
Dec 04 10:37:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.476504337 +0000 UTC m=+0.022943991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.585429077 +0000 UTC m=+0.131868721 container init 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.592600635 +0000 UTC m=+0.139040279 container start 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.596430161 +0000 UTC m=+0.142869845 container attach 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:37:59 compute-0 elated_zhukovsky[248384]: 167 167
Dec 04 10:37:59 compute-0 systemd[1]: libpod-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope: Deactivated successfully.
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.60365936 +0000 UTC m=+0.150099024 container died 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d529858f511ac93dc1c58077fa50a44d8cbd035f153a5b51833fd1abad50ee7f-merged.mount: Deactivated successfully.
Dec 04 10:37:59 compute-0 podman[248367]: 2025-12-04 10:37:59.6506984 +0000 UTC m=+0.197138034 container remove 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:37:59 compute-0 systemd[1]: libpod-conmon-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope: Deactivated successfully.
Dec 04 10:37:59 compute-0 podman[248407]: 2025-12-04 10:37:59.812014283 +0000 UTC m=+0.049327209 container create f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:37:59 compute-0 systemd[1]: Started libpod-conmon-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope.
Dec 04 10:37:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:37:59 compute-0 podman[248407]: 2025-12-04 10:37:59.875179914 +0000 UTC m=+0.112492880 container init f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:37:59 compute-0 podman[248407]: 2025-12-04 10:37:59.882152177 +0000 UTC m=+0.119465113 container start f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:37:59 compute-0 podman[248407]: 2025-12-04 10:37:59.885863369 +0000 UTC m=+0.123176305 container attach f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:37:59 compute-0 podman[248407]: 2025-12-04 10:37:59.794843405 +0000 UTC m=+0.032156361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:38:00 compute-0 wizardly_jackson[248424]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:38:00 compute-0 wizardly_jackson[248424]: --> All data devices are unavailable
Dec 04 10:38:00 compute-0 systemd[1]: libpod-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope: Deactivated successfully.
Dec 04 10:38:00 compute-0 podman[248407]: 2025-12-04 10:38:00.325302579 +0000 UTC m=+0.562615515 container died f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b-merged.mount: Deactivated successfully.
Dec 04 10:38:00 compute-0 ceph-mon[75358]: pgmap v814: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:00 compute-0 podman[248407]: 2025-12-04 10:38:00.369853427 +0000 UTC m=+0.607166363 container remove f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:38:00 compute-0 systemd[1]: libpod-conmon-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope: Deactivated successfully.
Dec 04 10:38:00 compute-0 sudo[248328]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:00 compute-0 sudo[248457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:38:00 compute-0 sudo[248457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:38:00 compute-0 sudo[248457]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:00 compute-0 sudo[248482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:38:00 compute-0 sudo[248482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.8435786 +0000 UTC m=+0.040782875 container create fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:38:00 compute-0 systemd[1]: Started libpod-conmon-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope.
Dec 04 10:38:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.91876289 +0000 UTC m=+0.115967175 container init fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.825229994 +0000 UTC m=+0.022434269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.93198236 +0000 UTC m=+0.129186625 container start fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.936491151 +0000 UTC m=+0.133695416 container attach fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:38:00 compute-0 dazzling_ramanujan[248534]: 167 167
Dec 04 10:38:00 compute-0 systemd[1]: libpod-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope: Deactivated successfully.
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.941181498 +0000 UTC m=+0.138385763 container died fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a600fac7b2c7ecc358b1e3caf7fb3697683fe07da5142975092eed770946e3-merged.mount: Deactivated successfully.
Dec 04 10:38:00 compute-0 podman[248518]: 2025-12-04 10:38:00.9951204 +0000 UTC m=+0.192324665 container remove fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:38:01 compute-0 systemd[1]: libpod-conmon-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope: Deactivated successfully.
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.156648307 +0000 UTC m=+0.046268482 container create f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:38:01 compute-0 systemd[1]: Started libpod-conmon-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope.
Dec 04 10:38:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.137547602 +0000 UTC m=+0.027167797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.239134669 +0000 UTC m=+0.128754904 container init f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.249475886 +0000 UTC m=+0.139096081 container start f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.254000539 +0000 UTC m=+0.143620724 container attach f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:38:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:01 compute-0 ceph-mon[75358]: pgmap v815: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:01 compute-0 eager_fermi[248575]: {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     "0": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "devices": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "/dev/loop3"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             ],
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_name": "ceph_lv0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_size": "21470642176",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "name": "ceph_lv0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "tags": {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_name": "ceph",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.crush_device_class": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.encrypted": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.objectstore": "bluestore",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_id": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.vdo": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.with_tpm": "0"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             },
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "vg_name": "ceph_vg0"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         }
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     ],
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     "1": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "devices": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "/dev/loop4"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             ],
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_name": "ceph_lv1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_size": "21470642176",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "name": "ceph_lv1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "tags": {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_name": "ceph",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.crush_device_class": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.encrypted": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.objectstore": "bluestore",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_id": "1",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.vdo": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.with_tpm": "0"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             },
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "vg_name": "ceph_vg1"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         }
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     ],
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     "2": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "devices": [
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "/dev/loop5"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             ],
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_name": "ceph_lv2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_size": "21470642176",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "name": "ceph_lv2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "tags": {
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.cluster_name": "ceph",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.crush_device_class": "",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.encrypted": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.objectstore": "bluestore",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osd_id": "2",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.vdo": "0",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:                 "ceph.with_tpm": "0"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             },
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "type": "block",
Dec 04 10:38:01 compute-0 eager_fermi[248575]:             "vg_name": "ceph_vg2"
Dec 04 10:38:01 compute-0 eager_fermi[248575]:         }
Dec 04 10:38:01 compute-0 eager_fermi[248575]:     ]
Dec 04 10:38:01 compute-0 eager_fermi[248575]: }
Dec 04 10:38:01 compute-0 systemd[1]: libpod-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope: Deactivated successfully.
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.554901802 +0000 UTC m=+0.444521957 container died f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21-merged.mount: Deactivated successfully.
Dec 04 10:38:01 compute-0 podman[248559]: 2025-12-04 10:38:01.5926123 +0000 UTC m=+0.482232455 container remove f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 04 10:38:01 compute-0 systemd[1]: libpod-conmon-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope: Deactivated successfully.
Dec 04 10:38:01 compute-0 sudo[248482]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:01 compute-0 sudo[248595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:38:01 compute-0 sudo[248595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:38:01 compute-0 sudo[248595]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:01 compute-0 sudo[248620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:38:01 compute-0 sudo[248620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.088450443 +0000 UTC m=+0.064004283 container create ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.049355821 +0000 UTC m=+0.024909561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:38:02 compute-0 systemd[1]: Started libpod-conmon-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope.
Dec 04 10:38:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.21016643 +0000 UTC m=+0.185720150 container init ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.216442616 +0000 UTC m=+0.191996336 container start ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.220242241 +0000 UTC m=+0.195795981 container attach ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:38:02 compute-0 modest_rhodes[248674]: 167 167
Dec 04 10:38:02 compute-0 systemd[1]: libpod-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope: Deactivated successfully.
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.222409185 +0000 UTC m=+0.197962915 container died ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb0e01e663b8ba6ffca569b4b0b3fd79f222512fe7b69bf06f124db7041594f1-merged.mount: Deactivated successfully.
Dec 04 10:38:02 compute-0 podman[248658]: 2025-12-04 10:38:02.262635915 +0000 UTC m=+0.238189675 container remove ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:38:02 compute-0 systemd[1]: libpod-conmon-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope: Deactivated successfully.
Dec 04 10:38:02 compute-0 podman[248699]: 2025-12-04 10:38:02.424809659 +0000 UTC m=+0.045161764 container create 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:38:02 compute-0 systemd[1]: Started libpod-conmon-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope.
Dec 04 10:38:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:38:02 compute-0 podman[248699]: 2025-12-04 10:38:02.403301834 +0000 UTC m=+0.023653969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:38:02 compute-0 podman[248699]: 2025-12-04 10:38:02.508558063 +0000 UTC m=+0.128910178 container init 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:38:02 compute-0 podman[248699]: 2025-12-04 10:38:02.520822547 +0000 UTC m=+0.141174672 container start 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:38:02 compute-0 podman[248699]: 2025-12-04 10:38:02.524522699 +0000 UTC m=+0.144874824 container attach 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:38:03 compute-0 lvm[248795]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:38:03 compute-0 lvm[248794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:38:03 compute-0 lvm[248795]: VG ceph_vg1 finished
Dec 04 10:38:03 compute-0 lvm[248794]: VG ceph_vg0 finished
Dec 04 10:38:03 compute-0 lvm[248797]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:38:03 compute-0 lvm[248797]: VG ceph_vg2 finished
Dec 04 10:38:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:03 compute-0 happy_curie[248716]: {}
Dec 04 10:38:03 compute-0 systemd[1]: libpod-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Deactivated successfully.
Dec 04 10:38:03 compute-0 systemd[1]: libpod-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Consumed 1.298s CPU time.
Dec 04 10:38:03 compute-0 podman[248699]: 2025-12-04 10:38:03.343574391 +0000 UTC m=+0.963926506 container died 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878-merged.mount: Deactivated successfully.
Dec 04 10:38:03 compute-0 podman[248699]: 2025-12-04 10:38:03.388502259 +0000 UTC m=+1.008854354 container remove 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:38:03 compute-0 systemd[1]: libpod-conmon-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Deactivated successfully.
Dec 04 10:38:03 compute-0 sudo[248620]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:38:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:38:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:38:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:38:03 compute-0 sudo[248813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:38:03 compute-0 sudo[248813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:38:03 compute-0 sudo[248813]: pam_unix(sudo:session): session closed for user root
Dec 04 10:38:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:04 compute-0 ceph-mon[75358]: pgmap v816: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:38:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:38:04 compute-0 podman[248839]: 2025-12-04 10:38:04.951525164 +0000 UTC m=+0.054413453 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:38:04 compute-0 podman[248838]: 2025-12-04 10:38:04.984998257 +0000 UTC m=+0.088741628 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 10:38:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:06 compute-0 ceph-mon[75358]: pgmap v817: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:08 compute-0 ceph-mon[75358]: pgmap v818: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:09 compute-0 ceph-mon[75358]: pgmap v819: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:38:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:38:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:38:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:38:12 compute-0 ceph-mon[75358]: pgmap v820: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:38:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:38:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:14 compute-0 ceph-mon[75358]: pgmap v821: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:16 compute-0 ceph-mon[75358]: pgmap v822: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:18 compute-0 ceph-mon[75358]: pgmap v823: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:19 compute-0 ceph-mon[75358]: pgmap v824: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:22 compute-0 ceph-mon[75358]: pgmap v825: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:22 compute-0 sshd-session[248886]: Invalid user git from 45.140.17.124 port 53974
Dec 04 10:38:23 compute-0 sshd-session[248886]: Connection reset by invalid user git 45.140.17.124 port 53974 [preauth]
Dec 04 10:38:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:24 compute-0 ceph-mon[75358]: pgmap v826: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:25 compute-0 sshd-session[248888]: Invalid user admin from 45.140.17.124 port 53990
Dec 04 10:38:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:25 compute-0 podman[248890]: 2025-12-04 10:38:25.325481909 +0000 UTC m=+0.062610219 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:38:25 compute-0 sshd-session[248888]: Connection reset by invalid user admin 45.140.17.124 port 53990 [preauth]
Dec 04 10:38:26 compute-0 sshd-session[248912]: Invalid user syncthing from 74.249.218.27 port 57160
Dec 04 10:38:26 compute-0 sshd-session[248912]: Received disconnect from 74.249.218.27 port 57160:11: Bye Bye [preauth]
Dec 04 10:38:26 compute-0 sshd-session[248912]: Disconnected from invalid user syncthing 74.249.218.27 port 57160 [preauth]
Dec 04 10:38:26 compute-0 ceph-mon[75358]: pgmap v827: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:38:26
Dec 04 10:38:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:38:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:38:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Dec 04 10:38:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:38:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:38:28 compute-0 sshd-session[248914]: Connection reset by authenticating user root 45.140.17.124 port 54022 [preauth]
Dec 04 10:38:28 compute-0 ceph-mon[75358]: pgmap v828: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:29 compute-0 sshd-session[248917]: Invalid user azureuser from 217.154.62.22 port 35356
Dec 04 10:38:29 compute-0 sshd-session[248917]: Received disconnect from 217.154.62.22 port 35356:11: Bye Bye [preauth]
Dec 04 10:38:29 compute-0 sshd-session[248917]: Disconnected from invalid user azureuser 217.154.62.22 port 35356 [preauth]
Dec 04 10:38:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:29 compute-0 ceph-mon[75358]: pgmap v829: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:30 compute-0 sshd-session[248920]: Invalid user root2 from 103.179.218.243 port 42768
Dec 04 10:38:30 compute-0 sshd-session[248916]: Connection reset by authenticating user root 45.140.17.124 port 54072 [preauth]
Dec 04 10:38:30 compute-0 sshd-session[248920]: Received disconnect from 103.179.218.243 port 42768:11: Bye Bye [preauth]
Dec 04 10:38:30 compute-0 sshd-session[248920]: Disconnected from invalid user root2 103.179.218.243 port 42768 [preauth]
Dec 04 10:38:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:32 compute-0 ceph-mon[75358]: pgmap v830: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:33 compute-0 sshd-session[248922]: Connection reset by authenticating user root 45.140.17.124 port 41264 [preauth]
Dec 04 10:38:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:34 compute-0 ceph-mon[75358]: pgmap v831: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:35 compute-0 podman[248925]: 2025-12-04 10:38:35.943564037 +0000 UTC m=+0.053099622 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 04 10:38:35 compute-0 podman[248924]: 2025-12-04 10:38:35.978291691 +0000 UTC m=+0.090486872 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 04 10:38:36 compute-0 ceph-mon[75358]: pgmap v832: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.361 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.363 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 04 10:38:36 compute-0 nova_compute[244644]: 2025-12-04 10:38:36.380 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:38:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:37 compute-0 ceph-mon[75358]: pgmap v833: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:38 compute-0 sshd-session[248969]: Invalid user gns3 from 103.149.86.230 port 45802
Dec 04 10:38:38 compute-0 nova_compute[244644]: 2025-12-04 10:38:38.396 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:38 compute-0 nova_compute[244644]: 2025-12-04 10:38:38.396 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:38:38 compute-0 nova_compute[244644]: 2025-12-04 10:38:38.397 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:38:38 compute-0 nova_compute[244644]: 2025-12-04 10:38:38.413 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:38:38 compute-0 sshd-session[248969]: Received disconnect from 103.149.86.230 port 45802:11: Bye Bye [preauth]
Dec 04 10:38:38 compute-0 sshd-session[248969]: Disconnected from invalid user gns3 103.149.86.230 port 45802 [preauth]
Dec 04 10:38:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.366 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.366 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.367 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.368 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.368 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:38:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:38:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230826318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:38:39 compute-0 nova_compute[244644]: 2025-12-04 10:38:39.912 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.074 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.287 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.288 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:38:40 compute-0 ceph-mon[75358]: pgmap v834: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2230826318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.364 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.443 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.444 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.459 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.481 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 04 10:38:40 compute-0 nova_compute[244644]: 2025-12-04 10:38:40.499 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:38:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:38:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905714256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:38:41 compute-0 nova_compute[244644]: 2025-12-04 10:38:41.064 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:38:41 compute-0 nova_compute[244644]: 2025-12-04 10:38:41.070 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:38:41 compute-0 nova_compute[244644]: 2025-12-04 10:38:41.148 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:38:41 compute-0 nova_compute[244644]: 2025-12-04 10:38:41.150 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:38:41 compute-0 nova_compute[244644]: 2025-12-04 10:38:41.150 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:38:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:41 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3905714256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.145 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:38:42 compute-0 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:38:42 compute-0 ceph-mon[75358]: pgmap v835: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:43 compute-0 ceph-mon[75358]: pgmap v836: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:46 compute-0 ceph-mon[75358]: pgmap v837: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:48 compute-0 ceph-mon[75358]: pgmap v838: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:49 compute-0 sshd-session[249015]: Invalid user guest from 107.175.213.239 port 33670
Dec 04 10:38:49 compute-0 sshd-session[249015]: Received disconnect from 107.175.213.239 port 33670:11: Bye Bye [preauth]
Dec 04 10:38:49 compute-0 sshd-session[249015]: Disconnected from invalid user guest 107.175.213.239 port 33670 [preauth]
Dec 04 10:38:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 04 10:38:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 04 10:38:50 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 04 10:38:50 compute-0 ceph-mon[75358]: pgmap v839: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 04 10:38:51 compute-0 ceph-mon[75358]: osdmap e118: 3 total, 3 up, 3 in
Dec 04 10:38:51 compute-0 ceph-mon[75358]: pgmap v841: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:38:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 04 10:38:51 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 04 10:38:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 04 10:38:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 04 10:38:52 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 04 10:38:52 compute-0 ceph-mon[75358]: osdmap e119: 3 total, 3 up, 3 in
Dec 04 10:38:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.7 MiB/s wr, 10 op/s
Dec 04 10:38:53 compute-0 ceph-mon[75358]: osdmap e120: 3 total, 3 up, 3 in
Dec 04 10:38:53 compute-0 ceph-mon[75358]: pgmap v844: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.7 MiB/s wr, 10 op/s
Dec 04 10:38:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 04 10:38:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 04 10:38:54 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 04 10:38:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:38:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:38:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:38:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 3.3 MiB/s wr, 13 op/s
Dec 04 10:38:55 compute-0 ceph-mon[75358]: osdmap e121: 3 total, 3 up, 3 in
Dec 04 10:38:55 compute-0 ceph-mon[75358]: pgmap v846: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 3.3 MiB/s wr, 13 op/s
Dec 04 10:38:55 compute-0 podman[249019]: 2025-12-04 10:38:55.952328957 +0000 UTC m=+0.060429444 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 40 op/s
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:38:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:38:58 compute-0 ceph-mon[75358]: pgmap v847: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 40 op/s
Dec 04 10:38:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:38:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 04 10:38:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 04 10:38:59 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 04 10:38:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.9 MiB/s wr, 55 op/s
Dec 04 10:39:00 compute-0 ceph-mon[75358]: osdmap e122: 3 total, 3 up, 3 in
Dec 04 10:39:00 compute-0 ceph-mon[75358]: pgmap v849: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.9 MiB/s wr, 55 op/s
Dec 04 10:39:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 39 op/s
Dec 04 10:39:02 compute-0 ceph-mon[75358]: pgmap v850: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 39 op/s
Dec 04 10:39:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 35 op/s
Dec 04 10:39:03 compute-0 sudo[249040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:39:03 compute-0 sudo[249040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:03 compute-0 sudo[249040]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:03 compute-0 sudo[249065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:39:03 compute-0 sudo[249065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:04 compute-0 podman[249134]: 2025-12-04 10:39:04.145928324 +0000 UTC m=+0.076293108 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:39:04 compute-0 podman[249134]: 2025-12-04 10:39:04.259543468 +0000 UTC m=+0.189908232 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:39:04 compute-0 ceph-mon[75358]: pgmap v851: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 35 op/s
Dec 04 10:39:05 compute-0 sudo[249065]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:05 compute-0 sudo[249322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:39:05 compute-0 sudo[249322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:05 compute-0 sudo[249322]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:05 compute-0 sudo[249347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:39:05 compute-0 sudo[249347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 MiB/s wr, 31 op/s
Dec 04 10:39:05 compute-0 sudo[249347]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:39:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:39:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:39:05 compute-0 sudo[249403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:39:05 compute-0 sudo[249403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:05 compute-0 sudo[249403]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:05 compute-0 sudo[249428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:39:05 compute-0 sudo[249428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:06 compute-0 ceph-mon[75358]: pgmap v852: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 MiB/s wr, 31 op/s
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:39:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.195640571 +0000 UTC m=+0.068903695 container create 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:39:06 compute-0 systemd[1]: Started libpod-conmon-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope.
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.163714196 +0000 UTC m=+0.036977410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.311878211 +0000 UTC m=+0.185141355 container init 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.325258409 +0000 UTC m=+0.198521523 container start 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 04 10:39:06 compute-0 friendly_leakey[249489]: 167 167
Dec 04 10:39:06 compute-0 systemd[1]: libpod-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope: Deactivated successfully.
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.333226825 +0000 UTC m=+0.206489939 container attach 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.334434025 +0000 UTC m=+0.207697139 container died 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:39:06 compute-0 podman[249480]: 2025-12-04 10:39:06.343144239 +0000 UTC m=+0.103850865 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:39:06 compute-0 podman[249483]: 2025-12-04 10:39:06.349140996 +0000 UTC m=+0.095913309 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5adfe84f0bfd228926324591f8c9939ef40d3e3cef7420aff8639807c5ffc0e0-merged.mount: Deactivated successfully.
Dec 04 10:39:06 compute-0 podman[249466]: 2025-12-04 10:39:06.381280607 +0000 UTC m=+0.254543721 container remove 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:39:06 compute-0 systemd[1]: libpod-conmon-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope: Deactivated successfully.
Dec 04 10:39:06 compute-0 podman[249552]: 2025-12-04 10:39:06.557466971 +0000 UTC m=+0.045991823 container create 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:39:06 compute-0 systemd[1]: Started libpod-conmon-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope.
Dec 04 10:39:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:06 compute-0 podman[249552]: 2025-12-04 10:39:06.536199708 +0000 UTC m=+0.024724580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:06 compute-0 podman[249552]: 2025-12-04 10:39:06.65132233 +0000 UTC m=+0.139847202 container init 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:39:06 compute-0 podman[249552]: 2025-12-04 10:39:06.658359773 +0000 UTC m=+0.146884625 container start 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:39:06 compute-0 podman[249552]: 2025-12-04 10:39:06.663364886 +0000 UTC m=+0.151889738 container attach 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:39:07 compute-0 charming_feynman[249569]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:39:07 compute-0 charming_feynman[249569]: --> All data devices are unavailable
Dec 04 10:39:07 compute-0 systemd[1]: libpod-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope: Deactivated successfully.
Dec 04 10:39:07 compute-0 podman[249552]: 2025-12-04 10:39:07.161923869 +0000 UTC m=+0.650448721 container died 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626-merged.mount: Deactivated successfully.
Dec 04 10:39:07 compute-0 podman[249552]: 2025-12-04 10:39:07.213670582 +0000 UTC m=+0.702195444 container remove 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:39:07 compute-0 systemd[1]: libpod-conmon-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope: Deactivated successfully.
Dec 04 10:39:07 compute-0 sudo[249428]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec 04 10:39:07 compute-0 sudo[249601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:39:07 compute-0 sudo[249601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:07 compute-0 sudo[249601]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:07 compute-0 sudo[249626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:39:07 compute-0 sudo[249626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.711472616 +0000 UTC m=+0.052790409 container create 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:39:07 compute-0 systemd[1]: Started libpod-conmon-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope.
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.688138653 +0000 UTC m=+0.029456496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.807598141 +0000 UTC m=+0.148915984 container init 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.817007503 +0000 UTC m=+0.158325336 container start 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.821693048 +0000 UTC m=+0.163010881 container attach 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:39:07 compute-0 hungry_nightingale[249680]: 167 167
Dec 04 10:39:07 compute-0 systemd[1]: libpod-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope: Deactivated successfully.
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.825130643 +0000 UTC m=+0.166448446 container died 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-896993f541bc9533cf7d720d451ab6f6ba7fddfaf643b9b5e7caae6e7ef737ee-merged.mount: Deactivated successfully.
Dec 04 10:39:07 compute-0 podman[249664]: 2025-12-04 10:39:07.875748768 +0000 UTC m=+0.217066561 container remove 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:39:07 compute-0 systemd[1]: libpod-conmon-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope: Deactivated successfully.
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.059310543 +0000 UTC m=+0.044345662 container create abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:39:08 compute-0 systemd[1]: Started libpod-conmon-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope.
Dec 04 10:39:08 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.041036603 +0000 UTC m=+0.026071732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.146396945 +0000 UTC m=+0.131432084 container init abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.15430986 +0000 UTC m=+0.139344979 container start abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.158547404 +0000 UTC m=+0.143582543 container attach abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:39:08 compute-0 ceph-mon[75358]: pgmap v853: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec 04 10:39:08 compute-0 festive_banzai[249721]: {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     "0": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "devices": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "/dev/loop3"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             ],
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_name": "ceph_lv0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_size": "21470642176",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "name": "ceph_lv0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "tags": {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_name": "ceph",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.crush_device_class": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.encrypted": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.objectstore": "bluestore",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_id": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.vdo": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.with_tpm": "0"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             },
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "vg_name": "ceph_vg0"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         }
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     ],
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     "1": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "devices": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "/dev/loop4"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             ],
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_name": "ceph_lv1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_size": "21470642176",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "name": "ceph_lv1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "tags": {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_name": "ceph",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.crush_device_class": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.encrypted": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.objectstore": "bluestore",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_id": "1",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.vdo": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.with_tpm": "0"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             },
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "vg_name": "ceph_vg1"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         }
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     ],
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     "2": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "devices": [
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "/dev/loop5"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             ],
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_name": "ceph_lv2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_size": "21470642176",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "name": "ceph_lv2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "tags": {
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.cluster_name": "ceph",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.crush_device_class": "",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.encrypted": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.objectstore": "bluestore",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osd_id": "2",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.vdo": "0",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:                 "ceph.with_tpm": "0"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             },
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "type": "block",
Dec 04 10:39:08 compute-0 festive_banzai[249721]:             "vg_name": "ceph_vg2"
Dec 04 10:39:08 compute-0 festive_banzai[249721]:         }
Dec 04 10:39:08 compute-0 festive_banzai[249721]:     ]
Dec 04 10:39:08 compute-0 festive_banzai[249721]: }
Dec 04 10:39:08 compute-0 systemd[1]: libpod-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope: Deactivated successfully.
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.496146958 +0000 UTC m=+0.481182087 container died abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596-merged.mount: Deactivated successfully.
Dec 04 10:39:08 compute-0 podman[249704]: 2025-12-04 10:39:08.558421389 +0000 UTC m=+0.543456528 container remove abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:39:08 compute-0 systemd[1]: libpod-conmon-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope: Deactivated successfully.
Dec 04 10:39:08 compute-0 sudo[249626]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:08 compute-0 sudo[249741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:39:08 compute-0 sudo[249741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:08 compute-0 sudo[249741]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:08 compute-0 sudo[249766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:39:08 compute-0 sudo[249766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.097154561 +0000 UTC m=+0.043435900 container create d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:39:09 compute-0 systemd[1]: Started libpod-conmon-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope.
Dec 04 10:39:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.078861981 +0000 UTC m=+0.025143330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.175662022 +0000 UTC m=+0.121943381 container init d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.186842037 +0000 UTC m=+0.133123366 container start d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.191001059 +0000 UTC m=+0.137282408 container attach d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:39:09 compute-0 wonderful_bhabha[249821]: 167 167
Dec 04 10:39:09 compute-0 systemd[1]: libpod-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope: Deactivated successfully.
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.193818488 +0000 UTC m=+0.140099817 container died d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be92daa41baaf46fbce1f3fd48b57712c6090a91893f590f8b0b60d8f30ae5d-merged.mount: Deactivated successfully.
Dec 04 10:39:09 compute-0 podman[249804]: 2025-12-04 10:39:09.230783038 +0000 UTC m=+0.177064367 container remove d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:39:09 compute-0 systemd[1]: libpod-conmon-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope: Deactivated successfully.
Dec 04 10:39:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:09 compute-0 podman[249844]: 2025-12-04 10:39:09.412131458 +0000 UTC m=+0.049850097 container create bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:39:09 compute-0 ceph-mon[75358]: pgmap v854: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:09 compute-0 systemd[1]: Started libpod-conmon-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope.
Dec 04 10:39:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:39:09 compute-0 podman[249844]: 2025-12-04 10:39:09.391703856 +0000 UTC m=+0.029422545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:39:09 compute-0 podman[249844]: 2025-12-04 10:39:09.555708911 +0000 UTC m=+0.193427570 container init bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:39:09 compute-0 podman[249844]: 2025-12-04 10:39:09.562372634 +0000 UTC m=+0.200091273 container start bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:39:09 compute-0 podman[249844]: 2025-12-04 10:39:09.566370212 +0000 UTC m=+0.204088881 container attach bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:39:10 compute-0 lvm[249939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:39:10 compute-0 lvm[249942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:39:10 compute-0 lvm[249942]: VG ceph_vg2 finished
Dec 04 10:39:10 compute-0 lvm[249939]: VG ceph_vg0 finished
Dec 04 10:39:10 compute-0 lvm[249940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:39:10 compute-0 lvm[249940]: VG ceph_vg1 finished
Dec 04 10:39:10 compute-0 mystifying_curran[249860]: {}
Dec 04 10:39:10 compute-0 systemd[1]: libpod-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Deactivated successfully.
Dec 04 10:39:10 compute-0 podman[249844]: 2025-12-04 10:39:10.48792182 +0000 UTC m=+1.125640459 container died bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:39:10 compute-0 systemd[1]: libpod-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Consumed 1.533s CPU time.
Dec 04 10:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993-merged.mount: Deactivated successfully.
Dec 04 10:39:10 compute-0 podman[249844]: 2025-12-04 10:39:10.536682739 +0000 UTC m=+1.174401368 container remove bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:39:10 compute-0 systemd[1]: libpod-conmon-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Deactivated successfully.
Dec 04 10:39:10 compute-0 sudo[249766]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:39:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:39:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:10 compute-0 sudo[249958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:39:10 compute-0 sudo[249958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:39:10 compute-0 sudo[249958]: pam_unix(sudo:session): session closed for user root
Dec 04 10:39:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:39:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:39:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:39:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:39:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:39:11 compute-0 ceph-mon[75358]: pgmap v855: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:39:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:39:11 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:11.977 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:39:11 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:11.980 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:12 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:12.591+0000 7f8423c95640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/dfcd5d86-8b04-4c9e-b7fc-a8b3dfe0eeb4'.
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/ded1c69d-60a5-4683-b853-47a6a2331bac'.
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:13 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:13 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec 04 10:39:13 compute-0 ceph-mon[75358]: pgmap v856: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:13 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:13 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.iwufnj(active, since 24m)
Dec 04 10:39:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec 04 10:39:14 compute-0 ceph-mon[75358]: mgrmap e10: compute-0.iwufnj(active, since 24m)
Dec 04 10:39:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:15 compute-0 ceph-mon[75358]: pgmap v857: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/11e6b02f-a848-4901-a396-9e1375701b90'.
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta.tmp'
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta.tmp' to config b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta'
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "format": "json"}]: dispatch
Dec 04 10:39:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec 04 10:39:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec 04 10:39:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "format": "json"}]: dispatch
Dec 04 10:39:17 compute-0 ceph-mon[75358]: pgmap v858: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec 04 10:39:18 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:18.983 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:39:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 2 op/s
Dec 04 10:39:20 compute-0 ceph-mon[75358]: pgmap v859: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 2 op/s
Dec 04 10:39:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:39:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:21 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:39:21 compute-0 ceph-mon[75358]: pgmap v860: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a0126db-6550-44ce-a3c1-aa8acaa2b013' of type subvolume
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.123+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a0126db-6550-44ce-a3c1-aa8acaa2b013' of type subvolume
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013'' moved to trashcan
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec 04 10:39:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:23 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.iwufnj(active, since 24m)
Dec 04 10:39:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:23 compute-0 ceph-mon[75358]: pgmap v861: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/f40d18bc-cd97-4fcd-8483-2659863f3efc'.
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta.tmp'
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta.tmp' to config b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta'
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:24 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:24 compute-0 ceph-mon[75358]: mgrmap e11: compute-0.iwufnj(active, since 24m)
Dec 04 10:39:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:24 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 04 10:39:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec 04 10:39:25 compute-0 ceph-mon[75358]: pgmap v862: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec 04 10:39:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 04 10:39:25 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/80abc3c2-3b19-4345-a3c6-9ba9356fed24'.
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta.tmp'
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta.tmp' to config b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta'
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2e2f6bb-d3cb-4e49-9c72-447ac26e9630' of type subvolume
Dec 04 10:39:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:26.182+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2e2f6bb-d3cb-4e49-9c72-447ac26e9630' of type subvolume
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630'' moved to trashcan
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:39:26
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 04 10:39:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:39:26 compute-0 ceph-mon[75358]: osdmap e123: 3 total, 3 up, 3 in
Dec 04 10:39:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:26 compute-0 podman[250022]: 2025-12-04 10:39:26.985537725 +0000 UTC m=+0.080116872 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 5 op/s
Dec 04 10:39:27 compute-0 ceph-mon[75358]: pgmap v864: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 5 op/s
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/59d121f4-da85-41d2-a460-c3e50ff205a8'.
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/614ddf80-49a0-47e1-8a8b-70edadadb393'.
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta.tmp'
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta.tmp' to config b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta'
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 20 KiB/s wr, 6 op/s
Dec 04 10:39:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:39:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec 04 10:39:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec 04 10:39:29 compute-0 ceph-mon[75358]: pgmap v865: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 20 KiB/s wr, 6 op/s
Dec 04 10:39:30 compute-0 sshd-session[249791]: error: kex_exchange_identification: read: Connection timed out
Dec 04 10:39:30 compute-0 sshd-session[249791]: banner exchange: Connection from 60.175.154.230 port 38142: Connection timed out
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6f3ec2c-ea96-4c61-9d0a-ba594fe98997' of type subvolume
Dec 04 10:39:30 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:30.617+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6f3ec2c-ea96-4c61-9d0a-ba594fe98997' of type subvolume
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997'' moved to trashcan
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec 04 10:39:31 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:39:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 30 KiB/s wr, 8 op/s
Dec 04 10:39:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "format": "json"}]: dispatch
Dec 04 10:39:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec 04 10:39:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:32 compute-0 ceph-mon[75358]: pgmap v866: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 30 KiB/s wr, 8 op/s
Dec 04 10:39:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 9 op/s
Dec 04 10:39:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "format": "json"}]: dispatch
Dec 04 10:39:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 04 10:39:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eafbbd68-3ab6-43b4-96ac-e00e60922483' of type subvolume
Dec 04 10:39:34 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:34.053+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eafbbd68-3ab6-43b4-96ac-e00e60922483' of type subvolume
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483'' moved to trashcan
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/c5dc9c35-ebe8-41e2-8bb1-a722819e148b'.
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta.tmp'
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta.tmp' to config b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta'
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mon[75358]: pgmap v867: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 9 op/s
Dec 04 10:39:34 compute-0 ceph-mon[75358]: osdmap e124: 3 total, 3 up, 3 in
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 32 KiB/s wr, 10 op/s
Dec 04 10:39:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec 04 10:39:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:35 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec 04 10:39:36 compute-0 ceph-mon[75358]: pgmap v869: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 32 KiB/s wr, 10 op/s
Dec 04 10:39:36 compute-0 podman[250044]: 2025-12-04 10:39:36.955927494 +0000 UTC m=+0.058210932 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 04 10:39:37 compute-0 podman[250043]: 2025-12-04 10:39:37.007194415 +0000 UTC m=+0.107600208 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669683444621841 of space, bias 1.0, pg target 0.20009050333865525 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.7942696887005924e-06 of space, bias 4.0, pg target 0.008153123626440712 quantized to 16 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:39:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 26 KiB/s wr, 8 op/s
Dec 04 10:39:37 compute-0 ceph-mon[75358]: pgmap v870: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 26 KiB/s wr, 8 op/s
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dfe7d61-dd18-4df1-ba8a-2c28cc36210a' of type subvolume
Dec 04 10:39:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:38.194+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dfe7d61-dd18-4df1-ba8a-2c28cc36210a' of type subvolume
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a'' moved to trashcan
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec 04 10:39:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec 04 10:39:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 26 KiB/s wr, 8 op/s
Dec 04 10:39:39 compute-0 nova_compute[244644]: 2025-12-04 10:39:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:39 compute-0 nova_compute[244644]: 2025-12-04 10:39:39.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:39:39 compute-0 nova_compute[244644]: 2025-12-04 10:39:39.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:39:39 compute-0 nova_compute[244644]: 2025-12-04 10:39:39.359 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:39:39 compute-0 ceph-mon[75358]: pgmap v871: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 26 KiB/s wr, 8 op/s
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.357 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.358 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:39:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:39:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455058553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:39:40 compute-0 nova_compute[244644]: 2025-12-04 10:39:40.961 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:39:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2455058553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.125 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.126 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.126 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.127 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.196 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.197 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.213 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/9a8f2f78-2832-4ae7-987b-f210b3ecae09'.
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta.tmp'
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta.tmp' to config b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta'
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:39:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101248298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.812 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.817 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.833 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.835 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:39:41 compute-0 nova_compute[244644]: 2025-12-04 10:39:41.835 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:39:42 compute-0 ceph-mon[75358]: pgmap v872: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:42 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:42 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4101248298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:39:42 compute-0 nova_compute[244644]: 2025-12-04 10:39:42.816 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:42 compute-0 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:42 compute-0 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:42 compute-0 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec 04 10:39:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:43 compute-0 nova_compute[244644]: 2025-12-04 10:39:43.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:39:43 compute-0 nova_compute[244644]: 2025-12-04 10:39:43.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:39:44 compute-0 ceph-mon[75358]: pgmap v873: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:45 compute-0 sshd-session[250134]: Invalid user guest from 74.249.218.27 port 60302
Dec 04 10:39:45 compute-0 sshd-session[250134]: Received disconnect from 74.249.218.27 port 60302:11: Bye Bye [preauth]
Dec 04 10:39:45 compute-0 sshd-session[250134]: Disconnected from invalid user guest 74.249.218.27 port 60302 [preauth]
Dec 04 10:39:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 15 KiB/s wr, 5 op/s
Dec 04 10:39:46 compute-0 ceph-mon[75358]: pgmap v874: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 15 KiB/s wr, 5 op/s
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '082afd37-c266-4ac4-8cb3-b2d98a4b42b6' of type subvolume
Dec 04 10:39:46 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:46.879+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '082afd37-c266-4ac4-8cb3-b2d98a4b42b6' of type subvolume
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6'' moved to trashcan
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec 04 10:39:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec 04 10:39:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:47 compute-0 ceph-mon[75358]: pgmap v875: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:39:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 10 KiB/s wr, 5 op/s
Dec 04 10:39:50 compute-0 ceph-mon[75358]: pgmap v876: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 10 KiB/s wr, 5 op/s
Dec 04 10:39:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "format": "json"}]: dispatch
Dec 04 10:39:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 4 op/s
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.390611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791390643, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 3651521, "memory_usage": 3725792, "flush_reason": "Manual Compaction"}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 04 10:39:51 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "format": "json"}]: dispatch
Dec 04 10:39:51 compute-0 ceph-mon[75358]: pgmap v877: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 4 op/s
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791414625, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3573459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16384, "largest_seqno": 18513, "table_properties": {"data_size": 3563758, "index_size": 6131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20132, "raw_average_key_size": 20, "raw_value_size": 3543996, "raw_average_value_size": 3565, "num_data_blocks": 276, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844580, "oldest_key_time": 1764844580, "file_creation_time": 1764844791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 24383 microseconds, and 9387 cpu microseconds.
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.414969) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3573459 bytes OK
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.415067) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417323) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417402) EVENT_LOG_v1 {"time_micros": 1764844791417390, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417456) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3642468, prev total WAL file size 3642468, number of live WAL files 2.
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.419083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3489KB)], [38(7735KB)]
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791419228, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11494113, "oldest_snapshot_seqno": -1}
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4529 keys, 9700592 bytes, temperature: kUnknown
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791493606, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9700592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666817, "index_size": 21377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109679, "raw_average_key_size": 24, "raw_value_size": 9581511, "raw_average_value_size": 2115, "num_data_blocks": 907, "num_entries": 4529, "num_filter_entries": 4529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.494165) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9700592 bytes
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.496183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5050, records dropped: 521 output_compression: NoCompression
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.496217) EVENT_LOG_v1 {"time_micros": 1764844791496199, "job": 18, "event": "compaction_finished", "compaction_time_micros": 74628, "compaction_time_cpu_micros": 22394, "output_level": 6, "num_output_files": 1, "total_output_size": 9700592, "num_input_records": 5050, "num_output_records": 4529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791497963, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791501300, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.418967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:39:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:39:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:53 compute-0 sshd-session[250136]: Invalid user cgpexpert from 103.149.86.230 port 59268
Dec 04 10:39:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 9.2 KiB/s wr, 4 op/s
Dec 04 10:39:53 compute-0 ceph-mon[75358]: pgmap v878: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 9.2 KiB/s wr, 4 op/s
Dec 04 10:39:53 compute-0 sshd-session[250136]: Received disconnect from 103.149.86.230 port 59268:11: Bye Bye [preauth]
Dec 04 10:39:53 compute-0 sshd-session[250136]: Disconnected from invalid user cgpexpert 103.149.86.230 port 59268 [preauth]
Dec 04 10:39:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/69a6afba-83cb-47e6-956d-f0583049d7f7'.
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta.tmp'
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta.tmp' to config b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta'
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:39:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:39:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:39:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:39:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c01d539d-f169-44cc-bc00-f705cd397a14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c01d539d-f169-44cc-bc00-f705cd397a14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01d539d-f169-44cc-bc00-f705cd397a14' of type subvolume
Dec 04 10:39:55 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:55.022+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01d539d-f169-44cc-bc00-f705cd397a14' of type subvolume
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14'' moved to trashcan
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec 04 10:39:55 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:39:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.7 KiB/s wr, 3 op/s
Dec 04 10:39:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec 04 10:39:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec 04 10:39:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:55 compute-0 ceph-mon[75358]: pgmap v879: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.7 KiB/s wr, 3 op/s
Dec 04 10:39:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 04 10:39:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 04 10:39:56 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 5 op/s
Dec 04 10:39:57 compute-0 ceph-mon[75358]: osdmap e125: 3 total, 3 up, 3 in
Dec 04 10:39:57 compute-0 ceph-mon[75358]: pgmap v881: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 5 op/s
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:39:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:39:57 compute-0 podman[250140]: 2025-12-04 10:39:57.97787551 +0000 UTC m=+0.087609545 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/9fb477bd-f6d6-4a93-81fa-6fa31c946d8f'.
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta.tmp'
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta.tmp' to config b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta'
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:39:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b67a5b53-5bfd-4560-8728-c671b5b695c4' of type subvolume
Dec 04 10:39:58 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:58.734+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b67a5b53-5bfd-4560-8728-c671b5b695c4' of type subvolume
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4'' moved to trashcan
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:39:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec 04 10:39:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:39:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 27 KiB/s wr, 7 op/s
Dec 04 10:39:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec 04 10:39:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "force": true, "format": "json"}]: dispatch
Dec 04 10:39:59 compute-0 ceph-mon[75358]: pgmap v882: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 27 KiB/s wr, 7 op/s
Dec 04 10:39:59 compute-0 sshd-session[250161]: Invalid user admin1234 from 217.154.62.22 port 41662
Dec 04 10:39:59 compute-0 sshd-session[250161]: Received disconnect from 217.154.62.22 port 41662:11: Bye Bye [preauth]
Dec 04 10:39:59 compute-0 sshd-session[250161]: Disconnected from invalid user admin1234 217.154.62.22 port 41662 [preauth]
Dec 04 10:40:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "format": "json"}]: dispatch
Dec 04 10:40:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:00 compute-0 sshd-session[250138]: Connection closed by 101.47.163.20 port 47378 [preauth]
Dec 04 10:40:00 compute-0 sshd-session[250163]: Invalid user bot from 103.179.218.243 port 42874
Dec 04 10:40:01 compute-0 sshd-session[250163]: Received disconnect from 103.179.218.243 port 42874:11: Bye Bye [preauth]
Dec 04 10:40:01 compute-0 sshd-session[250163]: Disconnected from invalid user bot 103.179.218.243 port 42874 [preauth]
Dec 04 10:40:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 35 KiB/s wr, 8 op/s
Dec 04 10:40:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "format": "json"}]: dispatch
Dec 04 10:40:02 compute-0 ceph-mon[75358]: pgmap v883: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 35 KiB/s wr, 8 op/s
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c6122866-729b-4644-a1c7-d8745b4ab929, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c6122866-729b-4644-a1c7-d8745b4ab929, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6122866-729b-4644-a1c7-d8745b4ab929' of type subvolume
Dec 04 10:40:02 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:02.359+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6122866-729b-4644-a1c7-d8745b4ab929' of type subvolume
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929'' moved to trashcan
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec 04 10:40:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec 04 10:40:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 34 KiB/s wr, 8 op/s
Dec 04 10:40:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 04 10:40:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 04 10:40:04 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 04 10:40:04 compute-0 ceph-mon[75358]: pgmap v884: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 34 KiB/s wr, 8 op/s
Dec 04 10:40:04 compute-0 ceph-mon[75358]: osdmap e126: 3 total, 3 up, 3 in
Dec 04 10:40:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "format": "json"}]: dispatch
Dec 04 10:40:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:40:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "format": "json"}]: dispatch
Dec 04 10:40:05 compute-0 ceph-mon[75358]: pgmap v886: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 17 KiB/s wr, 6 op/s
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd99e196-9855-42c0-b3ab-7d9a58ace6f7' of type subvolume
Dec 04 10:40:06 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:06.059+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd99e196-9855-42c0-b3ab-7d9a58ace6f7' of type subvolume
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7'' moved to trashcan
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/2e820380-b271-4f3e-8b24-f787b9d60a68'.
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta.tmp'
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta.tmp' to config b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta'
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 24 KiB/s wr, 7 op/s
Dec 04 10:40:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec 04 10:40:07 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:07 compute-0 ceph-mon[75358]: pgmap v887: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 24 KiB/s wr, 7 op/s
Dec 04 10:40:07 compute-0 podman[250166]: 2025-12-04 10:40:07.949396867 +0000 UTC m=+0.050523235 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:07 compute-0 podman[250165]: 2025-12-04 10:40:07.970607458 +0000 UTC m=+0.082857709 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 04 10:40:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "format": "json"}]: dispatch
Dec 04 10:40:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.054648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809054701, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 251, "total_data_size": 306517, "memory_usage": 315696, "flush_reason": "Manual Compaction"}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809058816, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 276638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18514, "largest_seqno": 18960, "table_properties": {"data_size": 274024, "index_size": 650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6967, "raw_average_key_size": 20, "raw_value_size": 268642, "raw_average_value_size": 776, "num_data_blocks": 29, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844792, "oldest_key_time": 1764844792, "file_creation_time": 1764844809, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4190 microseconds, and 1667 cpu microseconds.
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.058847) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 276638 bytes OK
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.058861) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061855) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061868) EVENT_LOG_v1 {"time_micros": 1764844809061864, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 303737, prev total WAL file size 303737, number of live WAL files 2.
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.062225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(270KB)], [41(9473KB)]
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809062244, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9977230, "oldest_snapshot_seqno": -1}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4363 keys, 6670725 bytes, temperature: kUnknown
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809101019, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6670725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6642365, "index_size": 16347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 106773, "raw_average_key_size": 24, "raw_value_size": 6564249, "raw_average_value_size": 1504, "num_data_blocks": 687, "num_entries": 4363, "num_filter_entries": 4363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844809, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.101348) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6670725 bytes
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.103007) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.3 rd, 171.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.3 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(60.2) write-amplify(24.1) OK, records in: 4875, records dropped: 512 output_compression: NoCompression
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.103024) EVENT_LOG_v1 {"time_micros": 1764844809103015, "job": 20, "event": "compaction_finished", "compaction_time_micros": 38930, "compaction_time_cpu_micros": 17444, "output_level": 6, "num_output_files": 1, "total_output_size": 6670725, "num_input_records": 4875, "num_output_records": 4363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809103204, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809104896, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.062181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 6 op/s
Dec 04 10:40:10 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "format": "json"}]: dispatch
Dec 04 10:40:10 compute-0 ceph-mon[75358]: pgmap v888: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 6 op/s
Dec 04 10:40:10 compute-0 sudo[250211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:40:10 compute-0 sudo[250211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:10 compute-0 sudo[250211]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:10 compute-0 sudo[250236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:40:10 compute-0 sudo[250236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b863b6ff-799e-4ddb-80e5-dee26b0df34e' of type subvolume
Dec 04 10:40:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:10.917+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b863b6ff-799e-4ddb-80e5-dee26b0df34e' of type subvolume
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e'' moved to trashcan
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec 04 10:40:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec 04 10:40:11 compute-0 sudo[250236]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:40:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:40:11 compute-0 sudo[250292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:40:11 compute-0 sudo[250292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:11 compute-0 sudo[250292]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: pgmap v889: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:40:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:40:11 compute-0 sudo[250317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:40:11 compute-0 sudo[250317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:11 compute-0 podman[250354]: 2025-12-04 10:40:11.973622353 +0000 UTC m=+0.118964478 container create 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:40:11 compute-0 podman[250354]: 2025-12-04 10:40:11.88612333 +0000 UTC m=+0.031465555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:12 compute-0 systemd[1]: Started libpod-conmon-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope.
Dec 04 10:40:12 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:12 compute-0 podman[250354]: 2025-12-04 10:40:12.09753562 +0000 UTC m=+0.242877775 container init 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:12 compute-0 podman[250354]: 2025-12-04 10:40:12.106254395 +0000 UTC m=+0.251596530 container start 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:12 compute-0 podman[250354]: 2025-12-04 10:40:12.109639958 +0000 UTC m=+0.254982123 container attach 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:40:12 compute-0 compassionate_rosalind[250370]: 167 167
Dec 04 10:40:12 compute-0 systemd[1]: libpod-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope: Deactivated successfully.
Dec 04 10:40:12 compute-0 podman[250354]: 2025-12-04 10:40:12.11379171 +0000 UTC m=+0.259133855 container died 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae79790d6cb72c79b32d067011bdbe737915b230bb6659a8ba0ef9efcab60696-merged.mount: Deactivated successfully.
Dec 04 10:40:12 compute-0 podman[250354]: 2025-12-04 10:40:12.162514268 +0000 UTC m=+0.307856403 container remove 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:40:12 compute-0 systemd[1]: libpod-conmon-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope: Deactivated successfully.
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.327421715 +0000 UTC m=+0.044656970 container create 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:40:12 compute-0 systemd[1]: Started libpod-conmon-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope.
Dec 04 10:40:12 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.308566082 +0000 UTC m=+0.025801337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.413961844 +0000 UTC m=+0.131197089 container init 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.424370739 +0000 UTC m=+0.141605974 container start 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.428173983 +0000 UTC m=+0.145409218 container attach 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:40:12 compute-0 mystifying_dijkstra[250411]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:40:12 compute-0 mystifying_dijkstra[250411]: --> All data devices are unavailable
Dec 04 10:40:12 compute-0 systemd[1]: libpod-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope: Deactivated successfully.
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.903050834 +0000 UTC m=+0.620286099 container died 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf-merged.mount: Deactivated successfully.
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/185c6afc-9ae6-4332-b81a-975debb7627f'.
Dec 04 10:40:12 compute-0 podman[250394]: 2025-12-04 10:40:12.965300556 +0000 UTC m=+0.682535791 container remove 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:12 compute-0 systemd[1]: libpod-conmon-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope: Deactivated successfully.
Dec 04 10:40:13 compute-0 sudo[250317]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "format": "json"}]: dispatch
Dec 04 10:40:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:13 compute-0 sudo[250444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:40:13 compute-0 sudo[250444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:13 compute-0 sudo[250444]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:13 compute-0 sudo[250469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:40:13 compute-0 sudo[250469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.460792242 +0000 UTC m=+0.040962538 container create 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:40:13 compute-0 systemd[1]: Started libpod-conmon-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope.
Dec 04 10:40:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.444086552 +0000 UTC m=+0.024256868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.542971194 +0000 UTC m=+0.123141570 container init 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.552082599 +0000 UTC m=+0.132252895 container start 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.555768689 +0000 UTC m=+0.135939005 container attach 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:13 compute-0 eloquent_bouman[250523]: 167 167
Dec 04 10:40:13 compute-0 systemd[1]: libpod-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope: Deactivated successfully.
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.560068214 +0000 UTC m=+0.140238520 container died 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e35f82feb258702b10241eee00bf7eabafef7b975c4149d1534db3997865f0f4-merged.mount: Deactivated successfully.
Dec 04 10:40:13 compute-0 podman[250506]: 2025-12-04 10:40:13.596369647 +0000 UTC m=+0.176539933 container remove 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:40:13 compute-0 systemd[1]: libpod-conmon-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope: Deactivated successfully.
Dec 04 10:40:13 compute-0 podman[250547]: 2025-12-04 10:40:13.788197686 +0000 UTC m=+0.049847257 container create 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 04 10:40:13 compute-0 systemd[1]: Started libpod-conmon-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope.
Dec 04 10:40:13 compute-0 podman[250547]: 2025-12-04 10:40:13.761255533 +0000 UTC m=+0.022905134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:14 compute-0 podman[250547]: 2025-12-04 10:40:14.24979825 +0000 UTC m=+0.511447861 container init 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:40:14 compute-0 podman[250547]: 2025-12-04 10:40:14.257281584 +0000 UTC m=+0.518931155 container start 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:40:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:14 compute-0 podman[250547]: 2025-12-04 10:40:14.271947905 +0000 UTC m=+0.533597506 container attach 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:40:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec 04 10:40:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "format": "json"}]: dispatch
Dec 04 10:40:14 compute-0 ceph-mon[75358]: pgmap v890: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec 04 10:40:14 compute-0 angry_franklin[250564]: {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     "0": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "devices": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "/dev/loop3"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             ],
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_name": "ceph_lv0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_size": "21470642176",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "name": "ceph_lv0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "tags": {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_name": "ceph",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.crush_device_class": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.encrypted": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.objectstore": "bluestore",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_id": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.vdo": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.with_tpm": "0"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             },
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "vg_name": "ceph_vg0"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         }
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     ],
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     "1": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "devices": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "/dev/loop4"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             ],
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_name": "ceph_lv1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_size": "21470642176",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "name": "ceph_lv1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "tags": {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_name": "ceph",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.crush_device_class": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.encrypted": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.objectstore": "bluestore",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_id": "1",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.vdo": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.with_tpm": "0"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             },
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "vg_name": "ceph_vg1"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         }
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     ],
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     "2": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "devices": [
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "/dev/loop5"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             ],
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_name": "ceph_lv2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_size": "21470642176",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "name": "ceph_lv2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "tags": {
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.cluster_name": "ceph",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.crush_device_class": "",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.encrypted": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.objectstore": "bluestore",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osd_id": "2",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.vdo": "0",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:                 "ceph.with_tpm": "0"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             },
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "type": "block",
Dec 04 10:40:14 compute-0 angry_franklin[250564]:             "vg_name": "ceph_vg2"
Dec 04 10:40:14 compute-0 angry_franklin[250564]:         }
Dec 04 10:40:14 compute-0 angry_franklin[250564]:     ]
Dec 04 10:40:14 compute-0 angry_franklin[250564]: }
Dec 04 10:40:14 compute-0 systemd[1]: libpod-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope: Deactivated successfully.
Dec 04 10:40:14 compute-0 podman[250547]: 2025-12-04 10:40:14.543552786 +0000 UTC m=+0.805202347 container died 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69-merged.mount: Deactivated successfully.
Dec 04 10:40:14 compute-0 podman[250547]: 2025-12-04 10:40:14.586994025 +0000 UTC m=+0.848643586 container remove 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:40:14 compute-0 systemd[1]: libpod-conmon-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope: Deactivated successfully.
Dec 04 10:40:14 compute-0 sudo[250469]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:14 compute-0 sudo[250586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:40:14 compute-0 sudo[250586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:14 compute-0 sudo[250586]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:14 compute-0 sudo[250611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:40:14 compute-0 sudo[250611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.068430907 +0000 UTC m=+0.045263145 container create 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:40:15 compute-0 systemd[1]: Started libpod-conmon-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope.
Dec 04 10:40:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.048929637 +0000 UTC m=+0.025761875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.14662133 +0000 UTC m=+0.123453558 container init 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.152650178 +0000 UTC m=+0.129482386 container start 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.156044151 +0000 UTC m=+0.132876389 container attach 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:40:15 compute-0 romantic_tharp[250663]: 167 167
Dec 04 10:40:15 compute-0 systemd[1]: libpod-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope: Deactivated successfully.
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.159200069 +0000 UTC m=+0.136032287 container died 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f735ef0aed61b70df94dff91d37327e2e019dacf321bd813dbf8c7dc6ec013-merged.mount: Deactivated successfully.
Dec 04 10:40:15 compute-0 podman[250648]: 2025-12-04 10:40:15.196256901 +0000 UTC m=+0.173089119 container remove 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:40:15 compute-0 systemd[1]: libpod-conmon-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope: Deactivated successfully.
Dec 04 10:40:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 25 KiB/s wr, 5 op/s
Dec 04 10:40:15 compute-0 podman[250685]: 2025-12-04 10:40:15.346697901 +0000 UTC m=+0.039192475 container create 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:40:15 compute-0 systemd[1]: Started libpod-conmon-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope.
Dec 04 10:40:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:40:15 compute-0 podman[250685]: 2025-12-04 10:40:15.422822644 +0000 UTC m=+0.115317228 container init 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:40:15 compute-0 podman[250685]: 2025-12-04 10:40:15.329000226 +0000 UTC m=+0.021494830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:40:15 compute-0 podman[250685]: 2025-12-04 10:40:15.433193559 +0000 UTC m=+0.125688133 container start 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:40:15 compute-0 podman[250685]: 2025-12-04 10:40:15.436971192 +0000 UTC m=+0.129465966 container attach 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:40:15 compute-0 ceph-mon[75358]: pgmap v891: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 25 KiB/s wr, 5 op/s
Dec 04 10:40:15 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:15.635 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:40:15 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:15.638 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:40:16 compute-0 lvm[250777]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:40:16 compute-0 lvm[250777]: VG ceph_vg0 finished
Dec 04 10:40:16 compute-0 lvm[250780]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:40:16 compute-0 lvm[250780]: VG ceph_vg1 finished
Dec 04 10:40:16 compute-0 lvm[250782]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:40:16 compute-0 lvm[250782]: VG ceph_vg2 finished
Dec 04 10:40:16 compute-0 magical_blackwell[250701]: {}
Dec 04 10:40:16 compute-0 systemd[1]: libpod-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Deactivated successfully.
Dec 04 10:40:16 compute-0 systemd[1]: libpod-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Consumed 1.357s CPU time.
Dec 04 10:40:16 compute-0 podman[250685]: 2025-12-04 10:40:16.315362768 +0000 UTC m=+1.007857332 container died 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2-merged.mount: Deactivated successfully.
Dec 04 10:40:16 compute-0 podman[250685]: 2025-12-04 10:40:16.367019068 +0000 UTC m=+1.059513642 container remove 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:40:16 compute-0 systemd[1]: libpod-conmon-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Deactivated successfully.
Dec 04 10:40:16 compute-0 sudo[250611]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:40:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:40:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:16 compute-0 sudo[250796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:40:16 compute-0 sudo[250796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:40:16 compute-0 sudo[250796]: pam_unix(sudo:session): session closed for user root
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec 04 10:40:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:17 compute-0 ceph-mon[75358]: pgmap v892: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "format": "json"}]: dispatch
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "format": "json"}]: dispatch
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "format": "json"}]: dispatch
Dec 04 10:40:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "format": "json"}]: dispatch
Dec 04 10:40:18 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:18.639 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:40:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 25 KiB/s wr, 5 op/s
Dec 04 10:40:19 compute-0 ceph-mon[75358]: pgmap v893: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 25 KiB/s wr, 5 op/s
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/11ab6c18-79c1-476e-b2f1-acdd14fba99c'.
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 5 op/s
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "target_sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, target_sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/3d262762-681b-471d-848e-05e9faf04c07'.
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec 04 10:40:21 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec 04 10:40:21 compute-0 ceph-mon[75358]: pgmap v894: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 5 op/s
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id f1f06a15-8b3b-472e-a302-1e70e5eecda7 for path b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, target_sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9)
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9) -- by 0 seconds
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "target_sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.snap/9dac2a63-84c3-4448-8251-c9b0776fc4fe/185c6afc-9ae6-4332-b81a-975debb7627f' to b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/3d262762-681b-471d-848e-05e9faf04c07'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking f1f06a15-8b3b-472e-a302-1e70e5eecda7
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9)
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:22 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 5 op/s
Dec 04 10:40:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:23 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:23 compute-0 ceph-mon[75358]: pgmap v895: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 5 op/s
Dec 04 10:40:23 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.iwufnj(active, since 25m)
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "format": "json"}]: dispatch
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec 04 10:40:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec 04 10:40:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:24 compute-0 ceph-mon[75358]: mgrmap e12: compute-0.iwufnj(active, since 25m)
Dec 04 10:40:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "format": "json"}]: dispatch
Dec 04 10:40:24 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 17 completed events
Dec 04 10:40:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:40:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 3 op/s
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 04 10:40:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 04 10:40:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:40:25 compute-0 ceph-mon[75358]: pgmap v896: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 3 op/s
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:25 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:40:26
Dec 04 10:40:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:40:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:40:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms']
Dec 04 10:40:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:40:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:26 compute-0 ceph-mon[75358]: osdmap e127: 3 total, 3 up, 3 in
Dec 04 10:40:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 55 KiB/s wr, 8 op/s
Dec 04 10:40:27 compute-0 ceph-mon[75358]: pgmap v898: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 55 KiB/s wr, 8 op/s
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9'' moved to trashcan
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:28 compute-0 podman[250859]: 2025-12-04 10:40:28.976177191 +0000 UTC m=+0.068700020 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 04 10:40:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 56 KiB/s wr, 10 op/s
Dec 04 10:40:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:29 compute-0 ceph-mon[75358]: pgmap v899: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 56 KiB/s wr, 10 op/s
Dec 04 10:40:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 04 10:40:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 04 10:40:30 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 82 KiB/s wr, 12 op/s
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec 04 10:40:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 04 10:40:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 04 10:40:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 04 10:40:31 compute-0 ceph-mon[75358]: osdmap e128: 3 total, 3 up, 3 in
Dec 04 10:40:31 compute-0 ceph-mon[75358]: pgmap v901: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 82 KiB/s wr, 12 op/s
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec 04 10:40:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mon[75358]: osdmap e129: 3 total, 3 up, 3 in
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 88 KiB/s wr, 17 op/s
Dec 04 10:40:33 compute-0 ceph-mon[75358]: pgmap v903: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 88 KiB/s wr, 17 op/s
Dec 04 10:40:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 04 10:40:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 04 10:40:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 04 10:40:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:980cc482-537a-4856-a203-512899e0bf5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:980cc482-537a-4856-a203-512899e0bf5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '980cc482-537a-4856-a203-512899e0bf5c' of type subvolume
Dec 04 10:40:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:35.208+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '980cc482-537a-4856-a203-512899e0bf5c' of type subvolume
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c'' moved to trashcan
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 04 10:40:35 compute-0 ceph-mon[75358]: osdmap e130: 3 total, 3 up, 3 in
Dec 04 10:40:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 04 10:40:35 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 49 KiB/s wr, 11 op/s
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec 04 10:40:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 04 10:40:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec 04 10:40:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:36 compute-0 ceph-mon[75358]: osdmap e131: 3 total, 3 up, 3 in
Dec 04 10:40:36 compute-0 ceph-mon[75358]: pgmap v906: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 49 KiB/s wr, 11 op/s
Dec 04 10:40:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 04 10:40:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670355214623647 of space, bias 1.0, pg target 0.2001106564387094 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.9761890973508873e-05 of space, bias 4.0, pg target 0.04771426916821065 quantized to 16 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:40:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 04 10:40:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:37 compute-0 ceph-mon[75358]: osdmap e132: 3 total, 3 up, 3 in
Dec 04 10:40:37 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 04 10:40:37 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 04 10:40:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 44 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s wr, 10 op/s
Dec 04 10:40:38 compute-0 ceph-mon[75358]: osdmap e133: 3 total, 3 up, 3 in
Dec 04 10:40:38 compute-0 ceph-mon[75358]: pgmap v909: 321 pgs: 321 active+clean; 44 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s wr, 10 op/s
Dec 04 10:40:39 compute-0 podman[250880]: 2025-12-04 10:40:39.012458506 +0000 UTC m=+0.077935969 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:40:39 compute-0 podman[250879]: 2025-12-04 10:40:39.02117632 +0000 UTC m=+0.120475654 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/18a0c018-b882-484c-9916-531fa9d043b1'.
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta.tmp'
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta.tmp' to config b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta'
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:40:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 04 10:40:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 04 10:40:39 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.286815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839286852, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 696, "num_deletes": 256, "total_data_size": 800292, "memory_usage": 814552, "flush_reason": "Manual Compaction"}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839293564, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 793076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18961, "largest_seqno": 19656, "table_properties": {"data_size": 789199, "index_size": 1593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8982, "raw_average_key_size": 19, "raw_value_size": 781183, "raw_average_value_size": 1679, "num_data_blocks": 71, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844810, "oldest_key_time": 1764844810, "file_creation_time": 1764844839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 6795 microseconds, and 3425 cpu microseconds.
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.293607) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 793076 bytes OK
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.293625) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295122) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295139) EVENT_LOG_v1 {"time_micros": 1764844839295135, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 796460, prev total WAL file size 796460, number of live WAL files 2.
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295648) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(774KB)], [44(6514KB)]
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839295747, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7463801, "oldest_snapshot_seqno": -1}
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86910b9a-b822-4f70-bcbf-6e5bf72bae29' of type subvolume
Dec 04 10:40:39 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:39.342+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86910b9a-b822-4f70-bcbf-6e5bf72bae29' of type subvolume
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4297 keys, 7337193 bytes, temperature: kUnknown
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839346937, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7337193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7307969, "index_size": 17402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106919, "raw_average_key_size": 24, "raw_value_size": 7229683, "raw_average_value_size": 1682, "num_data_blocks": 728, "num_entries": 4297, "num_filter_entries": 4297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1008 B/s rd, 128 KiB/s wr, 16 op/s
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.347498) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7337193 bytes
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.350846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 142.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(18.7) write-amplify(9.3) OK, records in: 4828, records dropped: 531 output_compression: NoCompression
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.350890) EVENT_LOG_v1 {"time_micros": 1764844839350871, "job": 22, "event": "compaction_finished", "compaction_time_micros": 51456, "compaction_time_cpu_micros": 19940, "output_level": 6, "num_output_files": 1, "total_output_size": 7337193, "num_input_records": 4828, "num_output_records": 4297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839351746, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:39 compute-0 ceph-mon[75358]: osdmap e134: 3 total, 3 up, 3 in
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839354721, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29'' moved to trashcan
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8589c6fa-15d7-4a25-a420-527b5f3ec7d3' of type subvolume
Dec 04 10:40:39 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:39.623+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8589c6fa-15d7-4a25-a420-527b5f3ec7d3' of type subvolume
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3'' moved to trashcan
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec 04 10:40:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 04 10:40:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 04 10:40:40 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 04 10:40:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec 04 10:40:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec 04 10:40:40 compute-0 ceph-mon[75358]: pgmap v911: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1008 B/s rd, 128 KiB/s wr, 16 op/s
Dec 04 10:40:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:40 compute-0 ceph-mon[75358]: osdmap e135: 3 total, 3 up, 3 in
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:40:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 811 B/s rd, 59 KiB/s wr, 8 op/s
Dec 04 10:40:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec 04 10:40:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.414 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.445 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:40:41 compute-0 nova_compute[244644]: 2025-12-04 10:40:41.447 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:40:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:40:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738397721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.009 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.176 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.323 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.323 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.343 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:40:42 compute-0 ceph-mon[75358]: pgmap v913: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 811 B/s rd, 59 KiB/s wr, 8 op/s
Dec 04 10:40:42 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2738397721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:40:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:40:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187427323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.964 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:40:42 compute-0 nova_compute[244644]: 2025-12-04 10:40:42.971 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:40:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 50 KiB/s wr, 11 op/s
Dec 04 10:40:43 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3187427323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:40:43 compute-0 ceph-mon[75358]: pgmap v914: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 50 KiB/s wr, 11 op/s
Dec 04 10:40:43 compute-0 nova_compute[244644]: 2025-12-04 10:40:43.794 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:40:43 compute-0 nova_compute[244644]: 2025-12-04 10:40:43.796 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:40:43 compute-0 nova_compute[244644]: 2025-12-04 10:40:43.796 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:40:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:40:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 04 10:40:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 04 10:40:44 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 04 10:40:44 compute-0 sshd-session[250968]: Invalid user deploy from 107.175.213.239 port 54680
Dec 04 10:40:44 compute-0 sshd-session[250968]: Received disconnect from 107.175.213.239 port 54680:11: Bye Bye [preauth]
Dec 04 10:40:44 compute-0 sshd-session[250968]: Disconnected from invalid user deploy 107.175.213.239 port 54680 [preauth]
Dec 04 10:40:44 compute-0 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:44 compute-0 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:44 compute-0 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:44 compute-0 nova_compute[244644]: 2025-12-04 10:40:44.793 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:45 compute-0 ceph-mon[75358]: osdmap e136: 3 total, 3 up, 3 in
Dec 04 10:40:45 compute-0 nova_compute[244644]: 2025-12-04 10:40:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:40:45 compute-0 nova_compute[244644]: 2025-12-04 10:40:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 843 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:45 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:45.769+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '369e894d-504a-4bdd-99b2-2e34e29db9b4' of type subvolume
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '369e894d-504a-4bdd-99b2-2e34e29db9b4' of type subvolume
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4'' moved to trashcan
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec 04 10:40:46 compute-0 ceph-mon[75358]: pgmap v916: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 843 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:40:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 55 KiB/s wr, 6 op/s
Dec 04 10:40:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec 04 10:40:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:48 compute-0 ceph-mon[75358]: pgmap v917: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 55 KiB/s wr, 6 op/s
Dec 04 10:40:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:40:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 49 KiB/s wr, 6 op/s
Dec 04 10:40:49 compute-0 ceph-mon[75358]: pgmap v918: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 49 KiB/s wr, 6 op/s
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Dec 04 10:40:51 compute-0 ceph-mon[75358]: pgmap v919: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/2101def7-9ea1-4e61-bdd7-4cd9a9dd7b54'.
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta.tmp'
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta.tmp' to config b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta'
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec 04 10:40:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 18 KiB/s wr, 3 op/s
Dec 04 10:40:53 compute-0 ceph-mon[75358]: pgmap v920: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 18 KiB/s wr, 3 op/s
Dec 04 10:40:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/869f5edd-b477-4fc8-89df-7313aed09736'.
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta.tmp'
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta.tmp' to config b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta'
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:40:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:40:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:40:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:40:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:40:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:40:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:40:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 16 KiB/s wr, 2 op/s
Dec 04 10:40:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:40:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec 04 10:40:56 compute-0 ceph-mon[75358]: pgmap v921: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 16 KiB/s wr, 2 op/s
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:40:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:40:58 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:58.413+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4a5cb54-f925-4ec3-ad46-31a41be6ac58' of type subvolume
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4a5cb54-f925-4ec3-ad46-31a41be6ac58' of type subvolume
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58'' moved to trashcan
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:40:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec 04 10:40:58 compute-0 ceph-mon[75358]: pgmap v922: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:40:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:40:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 3 op/s
Dec 04 10:40:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec 04 10:40:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "force": true, "format": "json"}]: dispatch
Dec 04 10:40:59 compute-0 ceph-mon[75358]: pgmap v923: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 3 op/s
Dec 04 10:40:59 compute-0 podman[250970]: 2025-12-04 10:40:59.949230643 +0000 UTC m=+0.055305261 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:41:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Dec 04 10:41:01 compute-0 ceph-mon[75358]: pgmap v924: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Dec 04 10:41:03 compute-0 sshd-session[250992]: Invalid user alex from 74.249.218.27 port 48892
Dec 04 10:41:03 compute-0 sshd-session[250992]: Received disconnect from 74.249.218.27 port 48892:11: Bye Bye [preauth]
Dec 04 10:41:03 compute-0 sshd-session[250992]: Disconnected from invalid user alex 74.249.218.27 port 48892 [preauth]
Dec 04 10:41:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Dec 04 10:41:04 compute-0 ceph-mon[75358]: pgmap v925: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Dec 04 10:41:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 3 op/s
Dec 04 10:41:05 compute-0 ceph-mon[75358]: pgmap v926: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 3 op/s
Dec 04 10:41:06 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:06.885 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:41:06 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:06.887 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:41:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 37 KiB/s wr, 4 op/s
Dec 04 10:41:07 compute-0 ceph-mon[75358]: pgmap v927: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 37 KiB/s wr, 4 op/s
Dec 04 10:41:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 3 op/s
Dec 04 10:41:09 compute-0 ceph-mon[75358]: pgmap v928: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 3 op/s
Dec 04 10:41:09 compute-0 podman[250995]: 2025-12-04 10:41:09.954233273 +0000 UTC m=+0.055308714 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:41:09 compute-0 podman[250994]: 2025-12-04 10:41:09.981799847 +0000 UTC m=+0.087773269 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 04 10:41:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:10.889 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:41:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Dec 04 10:41:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:41:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:41:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:41:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:41:11 compute-0 ceph-mon[75358]: pgmap v929: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Dec 04 10:41:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:41:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/b30d8655-9d07-48f5-9b2a-5c00b9d7715b'.
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta.tmp'
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta.tmp' to config b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta'
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:41:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:41:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 1 op/s
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8fa51969-52d7-4794-a864-cda7f0a42b93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8fa51969-52d7-4794-a864-cda7f0a42b93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8fa51969-52d7-4794-a864-cda7f0a42b93' of type subvolume
Dec 04 10:41:13 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:13.699+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8fa51969-52d7-4794-a864-cda7f0a42b93' of type subvolume
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93'' moved to trashcan
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec 04 10:41:13 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:13 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec 04 10:41:13 compute-0 ceph-mon[75358]: pgmap v930: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 1 op/s
Dec 04 10:41:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec 04 10:41:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Dec 04 10:41:15 compute-0 ceph-mon[75358]: pgmap v931: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Dec 04 10:41:16 compute-0 sudo[251040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:41:16 compute-0 sudo[251040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:16 compute-0 sudo[251040]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:16 compute-0 sudo[251065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:41:16 compute-0 sudo[251065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/b0e8816f-0808-444b-8920-ec78ecd56640'.
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta.tmp'
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta.tmp' to config b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta'
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:17 compute-0 sudo[251065]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 1 op/s
Dec 04 10:41:17 compute-0 sudo[251120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:41:17 compute-0 sudo[251120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:17 compute-0 sudo[251120]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:17 compute-0 sudo[251145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:41:17 compute-0 sudo[251145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:17 compute-0 podman[251183]: 2025-12-04 10:41:17.729674736 +0000 UTC m=+0.060501922 container create 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:41:17 compute-0 podman[251183]: 2025-12-04 10:41:17.692703579 +0000 UTC m=+0.023530785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:17 compute-0 systemd[1]: Started libpod-conmon-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope.
Dec 04 10:41:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:17 compute-0 podman[251183]: 2025-12-04 10:41:17.978826678 +0000 UTC m=+0.309653944 container init 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:41:17 compute-0 ceph-mon[75358]: pgmap v932: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 1 op/s
Dec 04 10:41:17 compute-0 podman[251183]: 2025-12-04 10:41:17.990058996 +0000 UTC m=+0.320886222 container start 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:41:17 compute-0 podman[251183]: 2025-12-04 10:41:17.994061566 +0000 UTC m=+0.324888832 container attach 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:41:18 compute-0 zen_shannon[251199]: 167 167
Dec 04 10:41:18 compute-0 systemd[1]: libpod-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope: Deactivated successfully.
Dec 04 10:41:18 compute-0 podman[251183]: 2025-12-04 10:41:18.002798502 +0000 UTC m=+0.333625688 container died 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a01b6324d1fadece11fb4f237fe6f99e99759b1e183e52bd44461e1321e126-merged.mount: Deactivated successfully.
Dec 04 10:41:18 compute-0 podman[251183]: 2025-12-04 10:41:18.327905209 +0000 UTC m=+0.658732405 container remove 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:41:18 compute-0 systemd[1]: libpod-conmon-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope: Deactivated successfully.
Dec 04 10:41:18 compute-0 podman[251221]: 2025-12-04 10:41:18.479524792 +0000 UTC m=+0.026061608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:18 compute-0 podman[251221]: 2025-12-04 10:41:18.61770773 +0000 UTC m=+0.164244536 container create 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:41:18 compute-0 systemd[1]: Started libpod-conmon-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope.
Dec 04 10:41:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:19 compute-0 podman[251221]: 2025-12-04 10:41:19.113515213 +0000 UTC m=+0.660052019 container init 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:41:19 compute-0 podman[251221]: 2025-12-04 10:41:19.120352012 +0000 UTC m=+0.666888808 container start 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:41:19 compute-0 podman[251221]: 2025-12-04 10:41:19.186026282 +0000 UTC m=+0.732563098 container attach 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:41:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Dec 04 10:41:19 compute-0 magical_spence[251238]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:41:19 compute-0 magical_spence[251238]: --> All data devices are unavailable
Dec 04 10:41:19 compute-0 systemd[1]: libpod-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope: Deactivated successfully.
Dec 04 10:41:19 compute-0 podman[251221]: 2025-12-04 10:41:19.622619525 +0000 UTC m=+1.169156351 container died 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:41:19 compute-0 ceph-mon[75358]: pgmap v933: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Dec 04 10:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6-merged.mount: Deactivated successfully.
Dec 04 10:41:20 compute-0 podman[251221]: 2025-12-04 10:41:20.208996215 +0000 UTC m=+1.755533001 container remove 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:41:20 compute-0 sudo[251145]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:20 compute-0 systemd[1]: libpod-conmon-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope: Deactivated successfully.
Dec 04 10:41:20 compute-0 sudo[251269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:41:20 compute-0 sudo[251269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:20 compute-0 sudo[251269]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:20 compute-0 sudo[251294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:41:20 compute-0 sudo[251294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:20 compute-0 podman[251331]: 2025-12-04 10:41:20.647519816 +0000 UTC m=+0.023260907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:20 compute-0 podman[251331]: 2025-12-04 10:41:20.839972042 +0000 UTC m=+0.215713123 container create 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:41:20 compute-0 systemd[1]: Started libpod-conmon-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope.
Dec 04 10:41:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:21 compute-0 podman[251331]: 2025-12-04 10:41:21.279885007 +0000 UTC m=+0.655626128 container init 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:41:21 compute-0 podman[251331]: 2025-12-04 10:41:21.287130437 +0000 UTC m=+0.662871518 container start 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:41:21 compute-0 relaxed_shamir[251348]: 167 167
Dec 04 10:41:21 compute-0 systemd[1]: libpod-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope: Deactivated successfully.
Dec 04 10:41:21 compute-0 podman[251331]: 2025-12-04 10:41:21.347507375 +0000 UTC m=+0.723248466 container attach 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 04 10:41:21 compute-0 podman[251331]: 2025-12-04 10:41:21.34811838 +0000 UTC m=+0.723859461 container died 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ee2a9d1d4c70e72deed30ae2b1b27310e5ee971a4008b60ff650e8b04ee734-merged.mount: Deactivated successfully.
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3a8de81a-77b5-415f-8412-5f7da4d28502, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3a8de81a-77b5-415f-8412-5f7da4d28502, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3a8de81a-77b5-415f-8412-5f7da4d28502' of type subvolume
Dec 04 10:41:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:21.702+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3a8de81a-77b5-415f-8412-5f7da4d28502' of type subvolume
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:21 compute-0 ceph-mon[75358]: pgmap v934: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502'' moved to trashcan
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec 04 10:41:21 compute-0 podman[251331]: 2025-12-04 10:41:21.805965751 +0000 UTC m=+1.181706812 container remove 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:41:21 compute-0 systemd[1]: libpod-conmon-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope: Deactivated successfully.
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.022388111 +0000 UTC m=+0.089887771 container create 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:21.95665491 +0000 UTC m=+0.024154590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:22 compute-0 systemd[1]: Started libpod-conmon-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope.
Dec 04 10:41:22 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.352886071 +0000 UTC m=+0.420385741 container init 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.360329076 +0000 UTC m=+0.427828736 container start 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.364754066 +0000 UTC m=+0.432253736 container attach 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]: {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     "0": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "devices": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "/dev/loop3"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             ],
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_name": "ceph_lv0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_size": "21470642176",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "name": "ceph_lv0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "tags": {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_name": "ceph",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.crush_device_class": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.encrypted": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.objectstore": "bluestore",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_id": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.vdo": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.with_tpm": "0"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             },
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "vg_name": "ceph_vg0"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         }
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     ],
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     "1": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "devices": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "/dev/loop4"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             ],
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_name": "ceph_lv1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_size": "21470642176",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "name": "ceph_lv1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "tags": {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_name": "ceph",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.crush_device_class": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.encrypted": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.objectstore": "bluestore",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_id": "1",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.vdo": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.with_tpm": "0"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             },
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "vg_name": "ceph_vg1"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         }
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     ],
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     "2": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "devices": [
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "/dev/loop5"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             ],
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_name": "ceph_lv2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_size": "21470642176",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "name": "ceph_lv2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "tags": {
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.cluster_name": "ceph",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.crush_device_class": "",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.encrypted": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.objectstore": "bluestore",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osd_id": "2",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.vdo": "0",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:                 "ceph.with_tpm": "0"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             },
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "type": "block",
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:             "vg_name": "ceph_vg2"
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:         }
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]:     ]
Dec 04 10:41:22 compute-0 eloquent_thompson[251389]: }
Dec 04 10:41:22 compute-0 systemd[1]: libpod-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope: Deactivated successfully.
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.678823789 +0000 UTC m=+0.746323449 container died 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c-merged.mount: Deactivated successfully.
Dec 04 10:41:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec 04 10:41:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:22 compute-0 podman[251373]: 2025-12-04 10:41:22.726740038 +0000 UTC m=+0.794239698 container remove 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:41:22 compute-0 systemd[1]: libpod-conmon-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope: Deactivated successfully.
Dec 04 10:41:22 compute-0 sudo[251294]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:22 compute-0 sudo[251410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:41:22 compute-0 sudo[251410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:22 compute-0 sudo[251410]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:22 compute-0 sudo[251437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:41:22 compute-0 sudo[251437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.275452793 +0000 UTC m=+0.103150011 container create 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.195711414 +0000 UTC m=+0.023408652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:23 compute-0 systemd[1]: Started libpod-conmon-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope.
Dec 04 10:41:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.399625424 +0000 UTC m=+0.227322662 container init 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.407681794 +0000 UTC m=+0.235379012 container start 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:41:23 compute-0 epic_beaver[251490]: 167 167
Dec 04 10:41:23 compute-0 systemd[1]: libpod-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope: Deactivated successfully.
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.417009985 +0000 UTC m=+0.244707223 container attach 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.417602131 +0000 UTC m=+0.245299359 container died 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d7ea25e8d4dd288d21e248a1008d9b3541759c76959e29ac3aebb92144c7041-merged.mount: Deactivated successfully.
Dec 04 10:41:23 compute-0 podman[251473]: 2025-12-04 10:41:23.546599491 +0000 UTC m=+0.374296709 container remove 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:41:23 compute-0 systemd[1]: libpod-conmon-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope: Deactivated successfully.
Dec 04 10:41:23 compute-0 podman[251513]: 2025-12-04 10:41:23.751760322 +0000 UTC m=+0.078297324 container create f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:41:23 compute-0 ceph-mon[75358]: pgmap v935: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:23 compute-0 podman[251513]: 2025-12-04 10:41:23.699729511 +0000 UTC m=+0.026266543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:41:23 compute-0 systemd[1]: Started libpod-conmon-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope.
Dec 04 10:41:23 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:41:23 compute-0 podman[251513]: 2025-12-04 10:41:23.865197136 +0000 UTC m=+0.191734158 container init f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:41:23 compute-0 podman[251513]: 2025-12-04 10:41:23.873356939 +0000 UTC m=+0.199893941 container start f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:41:23 compute-0 podman[251513]: 2025-12-04 10:41:23.906690466 +0000 UTC m=+0.233227468 container attach f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:41:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:24 compute-0 lvm[251608]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:41:24 compute-0 lvm[251605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:41:24 compute-0 lvm[251605]: VG ceph_vg0 finished
Dec 04 10:41:24 compute-0 lvm[251608]: VG ceph_vg1 finished
Dec 04 10:41:24 compute-0 lvm[251610]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:41:24 compute-0 lvm[251610]: VG ceph_vg2 finished
Dec 04 10:41:24 compute-0 lvm[251611]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:41:24 compute-0 lvm[251611]: VG ceph_vg1 finished
Dec 04 10:41:24 compute-0 lvm[251613]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:41:24 compute-0 lvm[251613]: VG ceph_vg2 finished
Dec 04 10:41:24 compute-0 hopeful_heisenberg[251529]: {}
Dec 04 10:41:24 compute-0 systemd[1]: libpod-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Deactivated successfully.
Dec 04 10:41:24 compute-0 systemd[1]: libpod-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Consumed 1.449s CPU time.
Dec 04 10:41:24 compute-0 podman[251513]: 2025-12-04 10:41:24.766997883 +0000 UTC m=+1.093534885 container died f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c-merged.mount: Deactivated successfully.
Dec 04 10:41:24 compute-0 podman[251513]: 2025-12-04 10:41:24.975829284 +0000 UTC m=+1.302366286 container remove f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:41:24 compute-0 systemd[1]: libpod-conmon-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Deactivated successfully.
Dec 04 10:41:25 compute-0 sudo[251437]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:41:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:41:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:25 compute-0 sudo[251627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:41:25 compute-0 sudo[251627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:41:25 compute-0 sudo[251627]: pam_unix(sudo:session): session closed for user root
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/7ba28297-c9db-4f6b-88f7-45beda1e2ba0'.
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta.tmp'
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta.tmp' to config b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta'
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:41:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:26 compute-0 ceph-mon[75358]: pgmap v936: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec 04 10:41:26 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:41:26
Dec 04 10:41:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:41:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:41:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms']
Dec 04 10:41:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:41:27 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 46 KiB/s wr, 4 op/s
Dec 04 10:41:27 compute-0 sshd-session[251431]: Connection closed by 103.149.86.230 port 59554 [preauth]
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413aa5c40>)]
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8435ce5b80>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183d26a0>)]
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:41:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:41:28 compute-0 ceph-mon[75358]: pgmap v937: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 46 KiB/s wr, 4 op/s
Dec 04 10:41:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec 04 10:41:29 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.iwufnj(active, since 27m)
Dec 04 10:41:29 compute-0 ceph-mon[75358]: pgmap v938: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec 04 10:41:29 compute-0 ceph-mon[75358]: mgrmap e13: compute-0.iwufnj(active, since 27m)
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:30 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:30.617+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1a650b0-8a39-49d0-8761-9a38bedfef6b' of type subvolume
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1a650b0-8a39-49d0-8761-9a38bedfef6b' of type subvolume
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b'' moved to trashcan
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec 04 10:41:30 compute-0 podman[251652]: 2025-12-04 10:41:30.9652753 +0000 UTC m=+0.068657514 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/d8c06adf-7d8c-42f7-8e3d-861c1d60ede8'.
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta.tmp'
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta.tmp' to config b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta'
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:31 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 44 KiB/s wr, 4 op/s
Dec 04 10:41:31 compute-0 sshd-session[251672]: Invalid user redmine from 217.154.62.22 port 53584
Dec 04 10:41:32 compute-0 sshd-session[251672]: Received disconnect from 217.154.62.22 port 53584:11: Bye Bye [preauth]
Dec 04 10:41:32 compute-0 sshd-session[251672]: Disconnected from invalid user redmine 217.154.62.22 port 53584 [preauth]
Dec 04 10:41:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec 04 10:41:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec 04 10:41:32 compute-0 ceph-mon[75358]: pgmap v939: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 44 KiB/s wr, 4 op/s
Dec 04 10:41:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 4 op/s
Dec 04 10:41:33 compute-0 ceph-mon[75358]: pgmap v940: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 4 op/s
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/37c965e6-b9df-4d0f-8913-3188a3bb9352'.
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta.tmp'
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta.tmp' to config b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta'
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec 04 10:41:34 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Dec 04 10:41:35 compute-0 ceph-mon[75358]: pgmap v941: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8e063322-2225-425c-8041-94c64095457f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8e063322-2225-425c-8041-94c64095457f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8e063322-2225-425c-8041-94c64095457f' of type subvolume
Dec 04 10:41:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:36.107+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8e063322-2225-425c-8041-94c64095457f' of type subvolume
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f'' moved to trashcan
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec 04 10:41:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec 04 10:41:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670871423372434 of space, bias 1.0, pg target 0.20012614270117302 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.437115334840369e-05 of space, bias 4.0, pg target 0.07724538401808442 quantized to 16 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:41:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 41 KiB/s wr, 4 op/s
Dec 04 10:41:38 compute-0 ceph-mon[75358]: pgmap v942: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 41 KiB/s wr, 4 op/s
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b37d179f-5d92-4510-9538-6c9b03887871, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b37d179f-5d92-4510-9538-6c9b03887871, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b37d179f-5d92-4510-9538-6c9b03887871' of type subvolume
Dec 04 10:41:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:38.311+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b37d179f-5d92-4510-9538-6c9b03887871' of type subvolume
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871'' moved to trashcan
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec 04 10:41:38 compute-0 sshd-session[251674]: Invalid user deploy from 103.179.218.243 port 42980
Dec 04 10:41:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec 04 10:41:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:39 compute-0 sshd-session[251674]: Received disconnect from 103.179.218.243 port 42980:11: Bye Bye [preauth]
Dec 04 10:41:39 compute-0 sshd-session[251674]: Disconnected from invalid user deploy 103.179.218.243 port 42980 [preauth]
Dec 04 10:41:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 4 op/s
Dec 04 10:41:40 compute-0 ceph-mon[75358]: pgmap v943: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 4 op/s
Dec 04 10:41:41 compute-0 nova_compute[244644]: 2025-12-04 10:41:41.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:41 compute-0 nova_compute[244644]: 2025-12-04 10:41:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:41:41 compute-0 nova_compute[244644]: 2025-12-04 10:41:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:41:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Dec 04 10:41:41 compute-0 podman[251677]: 2025-12-04 10:41:41.398259107 +0000 UTC m=+0.494216604 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 04 10:41:41 compute-0 podman[251676]: 2025-12-04 10:41:41.410239094 +0000 UTC m=+0.504945330 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 04 10:41:41 compute-0 nova_compute[244644]: 2025-12-04 10:41:41.412 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:41:41 compute-0 nova_compute[244644]: 2025-12-04 10:41:41.413 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:41 compute-0 ceph-mon[75358]: pgmap v944: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Dec 04 10:41:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/4710ae0a-ec4d-4e62-8fda-a8295c2f620f'.
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta.tmp'
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta.tmp' to config b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta'
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.367 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.369 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:41:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec 04 10:41:42 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:41:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279979488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:41:42 compute-0 nova_compute[244644]: 2025-12-04 10:41:42.934 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.085 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:41:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 6 op/s
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.618 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.619 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:41:43 compute-0 nova_compute[244644]: 2025-12-04 10:41:43.644 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:41:43 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1279979488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:41:43 compute-0 ceph-mon[75358]: pgmap v945: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 6 op/s
Dec 04 10:41:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:41:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569300093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:41:44 compute-0 nova_compute[244644]: 2025-12-04 10:41:44.170 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:41:44 compute-0 nova_compute[244644]: 2025-12-04 10:41:44.175 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:41:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3569300093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:41:45 compute-0 nova_compute[244644]: 2025-12-04 10:41:45.374 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:41:45 compute-0 nova_compute[244644]: 2025-12-04 10:41:45.376 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:41:45 compute-0 nova_compute[244644]: 2025-12-04 10:41:45.376 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:41:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:41:46 compute-0 ceph-mon[75358]: pgmap v946: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.376 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.377 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.396 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:41:46 compute-0 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:41:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:41:47 compute-0 ceph-mon[75358]: pgmap v947: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:48 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:48.725+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a5f4ecd-03b6-407a-8d82-15daa95b5ced' of type subvolume
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a5f4ecd-03b6-407a-8d82-15daa95b5ced' of type subvolume
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced'' moved to trashcan
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:48 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec 04 10:41:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 28 KiB/s wr, 4 op/s
Dec 04 10:41:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec 04 10:41:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:49 compute-0 ceph-mon[75358]: pgmap v948: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 28 KiB/s wr, 4 op/s
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 23 KiB/s wr, 3 op/s
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:51 compute-0 ceph-mon[75358]: pgmap v949: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 23 KiB/s wr, 3 op/s
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/a99bfaa6-75dd-4a13-893b-da8b9b54dca0'.
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta.tmp'
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta.tmp' to config b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta'
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:51 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec 04 10:41:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/c8254e14-3b4f-4a93-a1ae-bdb20560cbeb'.
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta.tmp'
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta.tmp' to config b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta'
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 40 KiB/s wr, 5 op/s
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/6b0b7784-b07f-495c-ad5e-81986ac7be36'.
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta.tmp'
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta.tmp' to config b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta'
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:53 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:53 compute-0 ceph-mon[75358]: pgmap v950: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 40 KiB/s wr, 5 op/s
Dec 04 10:41:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/31f73d8a-d868-48e2-8b85-21117fdcc89e'.
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta.tmp'
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta.tmp' to config b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta'
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec 04 10:41:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec 04 10:41:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:41:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.907 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:41:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:41:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:41:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec 04 10:41:55 compute-0 ceph-mon[75358]: pgmap v951: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:56 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:56.351+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c54a12b3-b92e-4a09-81b2-2bfc280d4eaa' of type subvolume
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c54a12b3-b92e-4a09-81b2-2bfc280d4eaa' of type subvolume
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa'' moved to trashcan
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec 04 10:41:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec 04 10:41:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cda2cc19-4836-4171-8f02-990e4046f802, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cda2cc19-4836-4171-8f02-990e4046f802, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:57 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:57.155+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cda2cc19-4836-4171-8f02-990e4046f802' of type subvolume
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cda2cc19-4836-4171-8f02-990e4046f802' of type subvolume
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802'' moved to trashcan
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec 04 10:41:57 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:41:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec 04 10:41:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:57 compute-0 ceph-mon[75358]: pgmap v952: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:41:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:41:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:41:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:41:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 5 op/s
Dec 04 10:41:59 compute-0 ceph-mon[75358]: pgmap v953: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 5 op/s
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3f91f1e-db38-4937-881a-6c033198bb16, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3f91f1e-db38-4937-881a-6c033198bb16, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:59.490+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3f91f1e-db38-4937-881a-6c033198bb16' of type subvolume
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3f91f1e-db38-4937-881a-6c033198bb16' of type subvolume
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "force": true, "format": "json"}]: dispatch
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16'' moved to trashcan
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/c8609094-ece1-462b-9a3d-54c307953629'.
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:41:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:41:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bdae8876-925e-4534-9c67-ead7c1879e8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bdae8876-925e-4534-9c67-ead7c1879e8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:00 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:00.737+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdae8876-925e-4534-9c67-ead7c1879e8c' of type subvolume
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdae8876-925e-4534-9c67-ead7c1879e8c' of type subvolume
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c'' moved to trashcan
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec 04 10:42:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 5 op/s
Dec 04 10:42:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec 04 10:42:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:01 compute-0 ceph-mon[75358]: pgmap v954: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 5 op/s
Dec 04 10:42:01 compute-0 podman[251765]: 2025-12-04 10:42:01.950895522 +0000 UTC m=+0.064277606 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 10:42:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "format": "json"}]: dispatch
Dec 04 10:42:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 72 KiB/s wr, 9 op/s
Dec 04 10:42:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "format": "json"}]: dispatch
Dec 04 10:42:03 compute-0 ceph-mon[75358]: pgmap v955: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 72 KiB/s wr, 9 op/s
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/1fa680d2-c7e3-4b92-8668-e87a35555293'.
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta.tmp'
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta.tmp' to config b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta'
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:04.812+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e221725b-e6e8-4c35-9638-fa0fd11665ad' of type subvolume
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e221725b-e6e8-4c35-9638-fa0fd11665ad' of type subvolume
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad'' moved to trashcan
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec 04 10:42:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 6 op/s
Dec 04 10:42:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec 04 10:42:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:05 compute-0 ceph-mon[75358]: pgmap v956: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 6 op/s
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 7 op/s
Dec 04 10:42:07 compute-0 ceph-mon[75358]: pgmap v957: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 7 op/s
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec 04 10:42:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:08 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:08.217+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d22f483-2196-4c24-a6e8-b6086bc6989e' of type subvolume
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d22f483-2196-4c24-a6e8-b6086bc6989e' of type subvolume
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e'' moved to trashcan
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec 04 10:42:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec 04 10:42:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Dec 04 10:42:09 compute-0 ceph-mon[75358]: pgmap v958: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Dec 04 10:42:09 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:09.544 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:42:09 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:09.545 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:42:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:10.548 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:42:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 04 10:42:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 04 10:42:10 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:11 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:11.278+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '481aa727-f970-4ad9-94c6-ca9f61924fb8' of type subvolume
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '481aa727-f970-4ad9-94c6-ca9f61924fb8' of type subvolume
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8'' moved to trashcan
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec 04 10:42:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 53 KiB/s wr, 8 op/s
Dec 04 10:42:11 compute-0 ceph-mon[75358]: osdmap e137: 3 total, 3 up, 3 in
Dec 04 10:42:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec 04 10:42:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:11 compute-0 ceph-mon[75358]: pgmap v960: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 53 KiB/s wr, 8 op/s
Dec 04 10:42:11 compute-0 podman[251787]: 2025-12-04 10:42:11.941988401 +0000 UTC m=+0.049525679 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 04 10:42:11 compute-0 podman[251786]: 2025-12-04 10:42:11.972361485 +0000 UTC m=+0.083195685 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 04 10:42:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 51 KiB/s wr, 8 op/s
Dec 04 10:42:13 compute-0 ceph-mon[75358]: pgmap v961: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 51 KiB/s wr, 8 op/s
Dec 04 10:42:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 04 10:42:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 04 10:42:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 04 10:42:15 compute-0 ceph-mon[75358]: osdmap e138: 3 total, 3 up, 3 in
Dec 04 10:42:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 63 KiB/s wr, 10 op/s
Dec 04 10:42:16 compute-0 ceph-mon[75358]: pgmap v963: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 63 KiB/s wr, 10 op/s
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8'.
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta.tmp'
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta.tmp' to config b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta'
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 33 KiB/s wr, 5 op/s
Dec 04 10:42:17 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec 04 10:42:18 compute-0 ceph-mon[75358]: pgmap v964: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 33 KiB/s wr, 5 op/s
Dec 04 10:42:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 702 B/s rd, 42 KiB/s wr, 7 op/s
Dec 04 10:42:19 compute-0 ceph-mon[75358]: pgmap v965: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 702 B/s rd, 42 KiB/s wr, 7 op/s
Dec 04 10:42:20 compute-0 sshd-session[251830]: Invalid user azureuser from 74.249.218.27 port 42944
Dec 04 10:42:20 compute-0 sshd-session[251830]: Received disconnect from 74.249.218.27 port 42944:11: Bye Bye [preauth]
Dec 04 10:42:20 compute-0 sshd-session[251830]: Disconnected from invalid user azureuser 74.249.218.27 port 42944 [preauth]
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8'.
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta.tmp'
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta.tmp' to config b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta'
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:21 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:21 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 36 KiB/s wr, 6 op/s
Dec 04 10:42:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec 04 10:42:22 compute-0 ceph-mon[75358]: pgmap v966: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 36 KiB/s wr, 6 op/s
Dec 04 10:42:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 3 op/s
Dec 04 10:42:23 compute-0 ceph-mon[75358]: pgmap v967: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 3 op/s
Dec 04 10:42:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:42:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:42:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:25 compute-0 sudo[251833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:42:25 compute-0 sudo[251833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:25 compute-0 sudo[251833]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:25 compute-0 sudo[251858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:42:25 compute-0 sudo[251858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 20 KiB/s wr, 3 op/s
Dec 04 10:42:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:25 compute-0 ceph-mon[75358]: pgmap v968: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 20 KiB/s wr, 3 op/s
Dec 04 10:42:25 compute-0 sudo[251858]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:42:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:42:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:42:25 compute-0 sudo[251914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:42:25 compute-0 sudo[251914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:25 compute-0 sudo[251914]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:26 compute-0 sudo[251939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:42:26 compute-0 sudo[251939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.290735847 +0000 UTC m=+0.042996727 container create 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:42:26 compute-0 systemd[1]: Started libpod-conmon-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope.
Dec 04 10:42:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.271056099 +0000 UTC m=+0.023316979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.372149338 +0000 UTC m=+0.124410218 container init 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.379567362 +0000 UTC m=+0.131828222 container start 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.383264844 +0000 UTC m=+0.135525834 container attach 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:42:26 compute-0 wizardly_gates[251994]: 167 167
Dec 04 10:42:26 compute-0 systemd[1]: libpod-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope: Deactivated successfully.
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.386181556 +0000 UTC m=+0.138442416 container died 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-227262a48362c5240ed6104526cb846d7bc14489a5dbd5f9d6a34e9bdbf0bd02-merged.mount: Deactivated successfully.
Dec 04 10:42:26 compute-0 podman[251978]: 2025-12-04 10:42:26.426854635 +0000 UTC m=+0.179115485 container remove 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:42:26 compute-0 systemd[1]: libpod-conmon-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope: Deactivated successfully.
Dec 04 10:42:26 compute-0 podman[252018]: 2025-12-04 10:42:26.582688512 +0000 UTC m=+0.042112046 container create 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:42:26 compute-0 systemd[1]: Started libpod-conmon-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope.
Dec 04 10:42:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:26 compute-0 podman[252018]: 2025-12-04 10:42:26.564798798 +0000 UTC m=+0.024222352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:26 compute-0 podman[252018]: 2025-12-04 10:42:26.668531711 +0000 UTC m=+0.127955275 container init 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:42:26 compute-0 podman[252018]: 2025-12-04 10:42:26.675220918 +0000 UTC m=+0.134644452 container start 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:42:26 compute-0 podman[252018]: 2025-12-04 10:42:26.678280703 +0000 UTC m=+0.137704237 container attach 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:42:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:42:26
Dec 04 10:42:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:42:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:42:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'backups']
Dec 04 10:42:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:42:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:42:27 compute-0 friendly_noyce[252034]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:42:27 compute-0 friendly_noyce[252034]: --> All data devices are unavailable
Dec 04 10:42:27 compute-0 systemd[1]: libpod-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope: Deactivated successfully.
Dec 04 10:42:27 compute-0 podman[252018]: 2025-12-04 10:42:27.161707108 +0000 UTC m=+0.621130642 container died 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39-merged.mount: Deactivated successfully.
Dec 04 10:42:27 compute-0 podman[252018]: 2025-12-04 10:42:27.201608459 +0000 UTC m=+0.661031993 container remove 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:42:27 compute-0 systemd[1]: libpod-conmon-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope: Deactivated successfully.
Dec 04 10:42:27 compute-0 sudo[251939]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:27 compute-0 sudo[252068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:42:27 compute-0 sudo[252068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:27 compute-0 sudo[252068]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:27 compute-0 sudo[252093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:42:27 compute-0 sudo[252093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 3 op/s
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.660969657 +0000 UTC m=+0.041089240 container create d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:42:27 compute-0 systemd[1]: Started libpod-conmon-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope.
Dec 04 10:42:27 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.723389846 +0000 UTC m=+0.103509439 container init d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.73001234 +0000 UTC m=+0.110131923 container start d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:42:27 compute-0 agitated_stonebraker[252147]: 167 167
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.734214134 +0000 UTC m=+0.114333737 container attach d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:42:27 compute-0 systemd[1]: libpod-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope: Deactivated successfully.
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.735367223 +0000 UTC m=+0.115486836 container died d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.642663613 +0000 UTC m=+0.022783216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0557a3c135958f1e0d02f228ba121d81f2df5ccf0080fb2ecf5bb932a25b6712-merged.mount: Deactivated successfully.
Dec 04 10:42:27 compute-0 podman[252130]: 2025-12-04 10:42:27.775670563 +0000 UTC m=+0.155790156 container remove d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:42:27 compute-0 systemd[1]: libpod-conmon-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope: Deactivated successfully.
Dec 04 10:42:27 compute-0 sshd-session[251832]: Invalid user supermaint from 101.47.163.20 port 54200
Dec 04 10:42:27 compute-0 ceph-mon[75358]: pgmap v969: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 3 op/s
Dec 04 10:42:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:27 compute-0 podman[252170]: 2025-12-04 10:42:27.959640868 +0000 UTC m=+0.060448361 container create e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:42:28 compute-0 systemd[1]: Started libpod-conmon-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope.
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:27.941334913 +0000 UTC m=+0.042142426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:28.058833439 +0000 UTC m=+0.159640962 container init e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:28.065238128 +0000 UTC m=+0.166045621 container start e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:28.068758775 +0000 UTC m=+0.169566268 container attach e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:42:28 compute-0 sshd-session[251832]: Received disconnect from 101.47.163.20 port 54200:11: Bye Bye [preauth]
Dec 04 10:42:28 compute-0 sshd-session[251832]: Disconnected from invalid user supermaint 101.47.163.20 port 54200 [preauth]
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:28 compute-0 silly_germain[252184]: {
Dec 04 10:42:28 compute-0 silly_germain[252184]:     "0": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:         {
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "devices": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "/dev/loop3"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             ],
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_name": "ceph_lv0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_size": "21470642176",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "name": "ceph_lv0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "tags": {
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_name": "ceph",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.crush_device_class": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.encrypted": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.objectstore": "bluestore",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_id": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.vdo": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.with_tpm": "0"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             },
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "vg_name": "ceph_vg0"
Dec 04 10:42:28 compute-0 silly_germain[252184]:         }
Dec 04 10:42:28 compute-0 silly_germain[252184]:     ],
Dec 04 10:42:28 compute-0 silly_germain[252184]:     "1": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:         {
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "devices": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "/dev/loop4"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             ],
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_name": "ceph_lv1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_size": "21470642176",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "name": "ceph_lv1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "tags": {
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_name": "ceph",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.crush_device_class": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.encrypted": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.objectstore": "bluestore",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_id": "1",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.vdo": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.with_tpm": "0"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             },
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "vg_name": "ceph_vg1"
Dec 04 10:42:28 compute-0 silly_germain[252184]:         }
Dec 04 10:42:28 compute-0 silly_germain[252184]:     ],
Dec 04 10:42:28 compute-0 silly_germain[252184]:     "2": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:         {
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "devices": [
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "/dev/loop5"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             ],
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_name": "ceph_lv2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_size": "21470642176",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "name": "ceph_lv2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "tags": {
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.cluster_name": "ceph",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.crush_device_class": "",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.encrypted": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.objectstore": "bluestore",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osd_id": "2",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.vdo": "0",
Dec 04 10:42:28 compute-0 silly_germain[252184]:                 "ceph.with_tpm": "0"
Dec 04 10:42:28 compute-0 silly_germain[252184]:             },
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "type": "block",
Dec 04 10:42:28 compute-0 silly_germain[252184]:             "vg_name": "ceph_vg2"
Dec 04 10:42:28 compute-0 silly_germain[252184]:         }
Dec 04 10:42:28 compute-0 silly_germain[252184]:     ]
Dec 04 10:42:28 compute-0 silly_germain[252184]: }
Dec 04 10:42:28 compute-0 systemd[1]: libpod-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope: Deactivated successfully.
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:28.351924932 +0000 UTC m=+0.452732425 container died e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6-merged.mount: Deactivated successfully.
Dec 04 10:42:28 compute-0 podman[252170]: 2025-12-04 10:42:28.388589352 +0000 UTC m=+0.489396845 container remove e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:42:28 compute-0 systemd[1]: libpod-conmon-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope: Deactivated successfully.
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 sudo[252093]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:42:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8
Dec 04 10:42:28 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8],prefix=session evict} (starting...)
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 sudo[252206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:28 compute-0 sudo[252206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:28 compute-0 sudo[252206]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:28 compute-0 sudo[252232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:42:28 compute-0 sudo[252232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:28.588+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ec2fca-4cd4-4393-9127-d135ebc9b908' of type subvolume
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ec2fca-4cd4-4393-9127-d135ebc9b908' of type subvolume
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908'' moved to trashcan
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.846609827 +0000 UTC m=+0.038840975 container create 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:42:28 compute-0 systemd[1]: Started libpod-conmon-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope.
Dec 04 10:42:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.908933532 +0000 UTC m=+0.101164720 container init 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.914751778 +0000 UTC m=+0.106982926 container start 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.918201273 +0000 UTC m=+0.110432471 container attach 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:42:28 compute-0 systemd[1]: libpod-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope: Deactivated successfully.
Dec 04 10:42:28 compute-0 peaceful_moore[252284]: 167 167
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.920382447 +0000 UTC m=+0.112613595 container died 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:42:28 compute-0 conmon[252284]: conmon 563be1ec6af2edf7d2cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope/container/memory.events
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.830131357 +0000 UTC m=+0.022362525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a04bdf779a6a4d7be7782df25b13490f608b1cc93dd4b38cce9c31596f9479-merged.mount: Deactivated successfully.
Dec 04 10:42:28 compute-0 podman[252268]: 2025-12-04 10:42:28.956427891 +0000 UTC m=+0.148659059 container remove 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:42:28 compute-0 systemd[1]: libpod-conmon-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope: Deactivated successfully.
Dec 04 10:42:29 compute-0 podman[252308]: 2025-12-04 10:42:29.106550637 +0000 UTC m=+0.042577648 container create 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:42:29 compute-0 systemd[1]: Started libpod-conmon-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope.
Dec 04 10:42:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:42:29 compute-0 podman[252308]: 2025-12-04 10:42:29.085426342 +0000 UTC m=+0.021453413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:42:29 compute-0 podman[252308]: 2025-12-04 10:42:29.184666414 +0000 UTC m=+0.120693455 container init 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:42:29 compute-0 podman[252308]: 2025-12-04 10:42:29.191908954 +0000 UTC m=+0.127935965 container start 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:42:29 compute-0 podman[252308]: 2025-12-04 10:42:29.197042522 +0000 UTC m=+0.133069563 container attach 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:42:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Dec 04 10:42:29 compute-0 lvm[252402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:42:29 compute-0 lvm[252403]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:42:29 compute-0 lvm[252403]: VG ceph_vg1 finished
Dec 04 10:42:29 compute-0 lvm[252402]: VG ceph_vg0 finished
Dec 04 10:42:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec 04 10:42:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:29 compute-0 ceph-mon[75358]: pgmap v970: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Dec 04 10:42:29 compute-0 lvm[252405]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:42:29 compute-0 lvm[252405]: VG ceph_vg2 finished
Dec 04 10:42:30 compute-0 blissful_ganguly[252324]: {}
Dec 04 10:42:30 compute-0 systemd[1]: libpod-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Deactivated successfully.
Dec 04 10:42:30 compute-0 podman[252308]: 2025-12-04 10:42:30.036674056 +0000 UTC m=+0.972701077 container died 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:42:30 compute-0 systemd[1]: libpod-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Consumed 1.378s CPU time.
Dec 04 10:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383-merged.mount: Deactivated successfully.
Dec 04 10:42:30 compute-0 podman[252308]: 2025-12-04 10:42:30.083749343 +0000 UTC m=+1.019776364 container remove 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:42:30 compute-0 systemd[1]: libpod-conmon-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Deactivated successfully.
Dec 04 10:42:30 compute-0 sudo[252232]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:42:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:42:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:30 compute-0 sudo[252421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:42:30 compute-0 sudo[252421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:42:30 compute-0 sudo[252421]: pam_unix(sudo:session): session closed for user root
Dec 04 10:42:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 3 op/s
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0'.
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta.tmp'
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta.tmp' to config b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta'
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:32 compute-0 ceph-mon[75358]: pgmap v971: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 3 op/s
Dec 04 10:42:32 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:32 compute-0 podman[252446]: 2025-12-04 10:42:32.980213313 +0000 UTC m=+0.090651850 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:42:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec 04 10:42:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:42:34 compute-0 ceph-mon[75358]: pgmap v972: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:42:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:42:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:42:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:36 compute-0 ceph-mon[75358]: pgmap v973: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671073559850907 of space, bias 1.0, pg target 0.2001322067955272 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010206047783933782 of space, bias 4.0, pg target 0.12247257340720538 quantized to 16 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:42:37 compute-0 sshd-session[252467]: Invalid user ionadmin from 107.175.213.239 port 58444
Dec 04 10:42:37 compute-0 sshd-session[252467]: Received disconnect from 107.175.213.239 port 58444:11: Bye Bye [preauth]
Dec 04 10:42:37 compute-0 sshd-session[252467]: Disconnected from invalid user ionadmin 107.175.213.239 port 58444 [preauth]
Dec 04 10:42:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:37 compute-0 ceph-mon[75358]: pgmap v974: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:42:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0
Dec 04 10:42:39 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0],prefix=session evict} (starting...)
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5da5d86-f585-431a-b524-b52c13853cdd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5da5d86-f585-431a-b524-b52c13853cdd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:39.328+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5da5d86-f585-431a-b524-b52c13853cdd' of type subvolume
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5da5d86-f585-431a-b524-b52c13853cdd' of type subvolume
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd'' moved to trashcan
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec 04 10:42:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Dec 04 10:42:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec 04 10:42:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:40 compute-0 ceph-mon[75358]: pgmap v975: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Dec 04 10:42:41 compute-0 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:41 compute-0 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:42:41 compute-0 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:42:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Dec 04 10:42:41 compute-0 nova_compute[244644]: 2025-12-04 10:42:41.429 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:42:41 compute-0 ceph-mon[75358]: pgmap v976: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Dec 04 10:42:42 compute-0 nova_compute[244644]: 2025-12-04 10:42:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf'.
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta.tmp'
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta.tmp' to config b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta'
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:42 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:42 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:42 compute-0 podman[252471]: 2025-12-04 10:42:42.952034585 +0000 UTC m=+0.054764600 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 04 10:42:42 compute-0 podman[252470]: 2025-12-04 10:42:42.983359213 +0000 UTC m=+0.087159504 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 04 10:42:43 compute-0 nova_compute[244644]: 2025-12-04 10:42:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:43 compute-0 nova_compute[244644]: 2025-12-04 10:42:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Dec 04 10:42:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec 04 10:42:43 compute-0 ceph-mon[75358]: pgmap v977: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Dec 04 10:42:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:42:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:42:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139802679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:42:44 compute-0 nova_compute[244644]: 2025-12-04 10:42:44.911 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:42:44 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/139802679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.955998) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964956036, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1633, "num_deletes": 257, "total_data_size": 2250600, "memory_usage": 2279296, "flush_reason": "Manual Compaction"}
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964974702, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2214525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19657, "largest_seqno": 21289, "table_properties": {"data_size": 2207086, "index_size": 4189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17739, "raw_average_key_size": 20, "raw_value_size": 2191305, "raw_average_value_size": 2581, "num_data_blocks": 188, "num_entries": 849, "num_filter_entries": 849, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844839, "oldest_key_time": 1764844839, "file_creation_time": 1764844964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 18755 microseconds, and 6490 cpu microseconds.
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.974745) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2214525 bytes OK
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.974788) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976939) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976961) EVENT_LOG_v1 {"time_micros": 1764844964976954, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2243208, prev total WAL file size 2243208, number of live WAL files 2.
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.977707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2162KB)], [47(7165KB)]
Dec 04 10:42:44 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964977795, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9551718, "oldest_snapshot_seqno": -1}
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4611 keys, 7772554 bytes, temperature: kUnknown
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965032241, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7772554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7741017, "index_size": 18883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 114458, "raw_average_key_size": 24, "raw_value_size": 7657118, "raw_average_value_size": 1660, "num_data_blocks": 788, "num_entries": 4611, "num_filter_entries": 4611, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.032621) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7772554 bytes
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.034209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 5146, records dropped: 535 output_compression: NoCompression
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.034226) EVENT_LOG_v1 {"time_micros": 1764844965034217, "job": 24, "event": "compaction_finished", "compaction_time_micros": 54588, "compaction_time_cpu_micros": 20320, "output_level": 6, "num_output_files": 1, "total_output_size": 7772554, "num_input_records": 5146, "num_output_records": 4611, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965034667, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965035947, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.977594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.035996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.065 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.066 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5048MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.067 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.067 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.130 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.131 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.153 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:42:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:45 compute-0 sshd-session[252511]: Invalid user ubuntu from 103.149.86.230 port 58290
Dec 04 10:42:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:42:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914684475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.705 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.711 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.740 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.742 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:42:45 compute-0 nova_compute[244644]: 2025-12-04 10:42:45.742 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:42:45 compute-0 sshd-session[252511]: Received disconnect from 103.149.86.230 port 58290:11: Bye Bye [preauth]
Dec 04 10:42:45 compute-0 sshd-session[252511]: Disconnected from invalid user ubuntu 103.149.86.230 port 58290 [preauth]
Dec 04 10:42:45 compute-0 ceph-mon[75358]: pgmap v978: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1914684475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:42:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:46 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:42:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:42:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:46 compute-0 sshd-session[252533]: Invalid user centos from 183.82.97.80 port 40200
Dec 04 10:42:46 compute-0 nova_compute[244644]: 2025-12-04 10:42:46.738 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:46 compute-0 nova_compute[244644]: 2025-12-04 10:42:46.738 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:46 compute-0 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:46 compute-0 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:42:46 compute-0 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:42:46 compute-0 sshd-session[252533]: Connection closed by invalid user centos 183.82.97.80 port 40200 [preauth]
Dec 04 10:42:46 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:47 compute-0 ceph-mon[75358]: pgmap v979: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec 04 10:42:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Dec 04 10:42:49 compute-0 ceph-mon[75358]: pgmap v980: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:42:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf
Dec 04 10:42:49 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf],prefix=session evict} (starting...)
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:42:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d7ec0481-b957-40a8-acf9-4ac33a165908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d7ec0481-b957-40a8-acf9-4ac33a165908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:50 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:50.045+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7ec0481-b957-40a8-acf9-4ac33a165908' of type subvolume
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7ec0481-b957-40a8-acf9-4ac33a165908' of type subvolume
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908'' moved to trashcan
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec 04 10:42:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 5 op/s
Dec 04 10:42:51 compute-0 ceph-mon[75358]: pgmap v981: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 5 op/s
Dec 04 10:42:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:42:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:42:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:42:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:42:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:42:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 7 op/s
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/12901b2f-0604-4ca2-8ff9-99a77556cca5'.
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta.tmp'
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta.tmp' to config b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta'
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:42:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:42:54 compute-0 ceph-mon[75358]: pgmap v982: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 7 op/s
Dec 04 10:42:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:42:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:42:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.909 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:42:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.909 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:42:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:42:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec 04 10:42:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 4 op/s
Dec 04 10:42:56 compute-0 ceph-mon[75358]: pgmap v983: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 4 op/s
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 5 op/s
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:57 compute-0 ceph-mon[75358]: pgmap v984: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 5 op/s
Dec 04 10:42:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:42:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:42:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec 04 10:42:57 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:42:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:42:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:42:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:42:58 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:58.621+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '24d9d739-98c3-41b3-9e91-5fbf698f4944' of type subvolume
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '24d9d739-98c3-41b3-9e91-5fbf698f4944' of type subvolume
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944'' moved to trashcan
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:42:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec 04 10:42:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:42:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 8 op/s
Dec 04 10:42:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec 04 10:42:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "force": true, "format": "json"}]: dispatch
Dec 04 10:42:59 compute-0 ceph-mon[75358]: pgmap v985: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 8 op/s
Dec 04 10:43:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:01 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:43:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:01 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 47 KiB/s wr, 6 op/s
Dec 04 10:43:01 compute-0 sshd-session[252561]: Received disconnect from 217.154.62.22 port 33940:11: Bye Bye [preauth]
Dec 04 10:43:01 compute-0 sshd-session[252561]: Disconnected from authenticating user root 217.154.62.22 port 33940 [preauth]
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/9a911faf-043c-4c37-9142-455a2d8f4429'.
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta.tmp'
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta.tmp' to config b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta'
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:02 compute-0 ceph-mon[75358]: pgmap v986: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 47 KiB/s wr, 6 op/s
Dec 04 10:43:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec 04 10:43:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 10 op/s
Dec 04 10:43:03 compute-0 podman[252563]: 2025-12-04 10:43:03.959702031 +0000 UTC m=+0.062867441 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:43:04 compute-0 ceph-mon[75358]: pgmap v987: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 10 op/s
Dec 04 10:43:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:43:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec 04 10:43:04 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:06 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:06.154+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97fc4d92-2e4d-40fb-86bf-ef965853aa37' of type subvolume
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97fc4d92-2e4d-40fb-86bf-ef965853aa37' of type subvolume
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37'' moved to trashcan
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec 04 10:43:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:06 compute-0 ceph-mon[75358]: pgmap v988: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec 04 10:43:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec 04 10:43:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 8 op/s
Dec 04 10:43:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:08 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:43:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:08 compute-0 ceph-mon[75358]: pgmap v989: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 8 op/s
Dec 04 10:43:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 10 op/s
Dec 04 10:43:10 compute-0 sshd-session[252584]: Invalid user administrator from 103.230.176.152 port 58878
Dec 04 10:43:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:10.248 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:43:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:10.250 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:43:10 compute-0 ceph-mon[75358]: pgmap v990: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 10 op/s
Dec 04 10:43:10 compute-0 sshd-session[252584]: Connection closed by invalid user administrator 103.230.176.152 port 58878 [preauth]
Dec 04 10:43:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 8 op/s
Dec 04 10:43:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:43:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:43:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:43:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:43:11 compute-0 ceph-mon[75358]: pgmap v991: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 8 op/s
Dec 04 10:43:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:43:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:43:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:12 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec 04 10:43:12 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 11 op/s
Dec 04 10:43:13 compute-0 ceph-mon[75358]: pgmap v992: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 11 op/s
Dec 04 10:43:13 compute-0 sshd-session[252587]: Invalid user oracle from 103.179.218.243 port 43086
Dec 04 10:43:13 compute-0 podman[252590]: 2025-12-04 10:43:13.603679489 +0000 UTC m=+0.052822862 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 04 10:43:13 compute-0 podman[252589]: 2025-12-04 10:43:13.650961923 +0000 UTC m=+0.104921994 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 04 10:43:13 compute-0 sshd-session[252587]: Received disconnect from 103.179.218.243 port 43086:11: Bye Bye [preauth]
Dec 04 10:43:13 compute-0 sshd-session[252587]: Disconnected from invalid user oracle 103.179.218.243 port 43086 [preauth]
Dec 04 10:43:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 7 op/s
Dec 04 10:43:15 compute-0 ceph-mon[75358]: pgmap v993: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 7 op/s
Dec 04 10:43:15 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:15 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:15 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec 04 10:43:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:15 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:15 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:43:15 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec 04 10:43:16 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:16 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:17 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:17.252 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:43:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 8 op/s
Dec 04 10:43:17 compute-0 ceph-mon[75358]: pgmap v994: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 8 op/s
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/e2160647-d792-4be6-83e3-0a77d5539fd0'.
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec 04 10:43:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec 04 10:43:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec 04 10:43:19 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 9 op/s
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec 04 10:43:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec 04 10:43:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:43:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec 04 10:43:19 compute-0 ceph-mon[75358]: pgmap v995: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 9 op/s
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/a711eac5-4d18-4be8-8bdb-d9f7a5922442'.
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta.tmp'
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta.tmp' to config b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta'
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec 04 10:43:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 7 op/s
Dec 04 10:43:21 compute-0 ceph-mon[75358]: pgmap v996: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 7 op/s
Dec 04 10:43:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "format": "json"}]: dispatch
Dec 04 10:43:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee821ced-1eec-43e8-af63-bd95973cd67b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee821ced-1eec-43e8-af63-bd95973cd67b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:23.147+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee821ced-1eec-43e8-af63-bd95973cd67b' of type subvolume
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee821ced-1eec-43e8-af63-bd95973cd67b' of type subvolume
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b'' moved to trashcan
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 10 op/s
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/251c0bc7-d836-4231-96c8-6099843232d7'.
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta.tmp'
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta.tmp' to config b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta'
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:23 compute-0 ceph-mon[75358]: pgmap v997: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 10 op/s
Dec 04 10:43:23 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec 04 10:43:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec 04 10:43:25 compute-0 ceph-mon[75358]: pgmap v998: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:43:26
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:efb32910-eddf-42fc-9d2f-7022478fa2af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:efb32910-eddf-42fc-9d2f-7022478fa2af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:26.945+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efb32910-eddf-42fc-9d2f-7022478fa2af' of type subvolume
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efb32910-eddf-42fc-9d2f-7022478fa2af' of type subvolume
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af'' moved to trashcan
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec 04 10:43:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec 04 10:43:27 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:27 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:27 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec 04 10:43:27 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:27 compute-0 ceph-mon[75358]: pgmap v999: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec 04 10:43:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 9 op/s
Dec 04 10:43:29 compute-0 ceph-mon[75358]: pgmap v1000: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 9 op/s
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:30.140+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1276f5c4-3479-4622-a6c1-a1fd0508feb3' of type subvolume
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1276f5c4-3479-4622-a6c1-a1fd0508feb3' of type subvolume
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3'' moved to trashcan
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 sudo[252636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:43:30 compute-0 sudo[252636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:30 compute-0 sudo[252636]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:30 compute-0 sudo[252661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:43:30 compute-0 sudo[252661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:30.591+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dba135ca-99df-42d3-a2b3-b27ad79995b7' of type subvolume
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dba135ca-99df-42d3-a2b3-b27ad79995b7' of type subvolume
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7'' moved to trashcan
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec 04 10:43:30 compute-0 sudo[252661]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:43:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:43:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:43:31 compute-0 sudo[252717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:43:31 compute-0 sudo[252717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:31 compute-0 sudo[252717]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:31 compute-0 sudo[252742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:43:31 compute-0 sudo[252742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.350018167 +0000 UTC m=+0.052183516 container create ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:43:31 compute-0 systemd[1]: Started libpod-conmon-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope.
Dec 04 10:43:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.330456502 +0000 UTC m=+0.032621831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.444377118 +0000 UTC m=+0.146542437 container init ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.451140496 +0000 UTC m=+0.153305805 container start ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.455199157 +0000 UTC m=+0.157364466 container attach ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:43:31 compute-0 romantic_cori[252796]: 167 167
Dec 04 10:43:31 compute-0 systemd[1]: libpod-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope: Deactivated successfully.
Dec 04 10:43:31 compute-0 conmon[252796]: conmon ff1d1cf0e4f1fd5c4937 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope/container/memory.events
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.45975898 +0000 UTC m=+0.161924289 container died ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbc2c7fc9435236e73ee9799e57dc82fc6d68180647fae5fee31f0adb8b985a5-merged.mount: Deactivated successfully.
Dec 04 10:43:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:43:31 compute-0 ceph-mon[75358]: pgmap v1001: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec 04 10:43:31 compute-0 podman[252780]: 2025-12-04 10:43:31.498544793 +0000 UTC m=+0.200710112 container remove ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:43:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 04 10:43:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 04 10:43:31 compute-0 systemd[1]: libpod-conmon-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope: Deactivated successfully.
Dec 04 10:43:31 compute-0 podman[252822]: 2025-12-04 10:43:31.69992016 +0000 UTC m=+0.041263436 container create 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:43:31 compute-0 systemd[1]: Started libpod-conmon-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope.
Dec 04 10:43:31 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:31 compute-0 podman[252822]: 2025-12-04 10:43:31.769337321 +0000 UTC m=+0.110680627 container init 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:43:31 compute-0 podman[252822]: 2025-12-04 10:43:31.777433002 +0000 UTC m=+0.118776288 container start 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:43:31 compute-0 podman[252822]: 2025-12-04 10:43:31.682665151 +0000 UTC m=+0.024008457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:31 compute-0 podman[252822]: 2025-12-04 10:43:31.781500244 +0000 UTC m=+0.122843530 container attach 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:43:32 compute-0 goofy_turing[252838]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:43:32 compute-0 goofy_turing[252838]: --> All data devices are unavailable
Dec 04 10:43:32 compute-0 systemd[1]: libpod-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope: Deactivated successfully.
Dec 04 10:43:32 compute-0 podman[252822]: 2025-12-04 10:43:32.255770142 +0000 UTC m=+0.597113428 container died 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591-merged.mount: Deactivated successfully.
Dec 04 10:43:32 compute-0 podman[252822]: 2025-12-04 10:43:32.299170659 +0000 UTC m=+0.640513945 container remove 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:43:32 compute-0 systemd[1]: libpod-conmon-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope: Deactivated successfully.
Dec 04 10:43:32 compute-0 sudo[252742]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:32 compute-0 sudo[252870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:43:32 compute-0 sudo[252870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:32 compute-0 sudo[252870]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:32 compute-0 sudo[252895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:43:32 compute-0 sudo[252895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:32 compute-0 ceph-mon[75358]: osdmap e139: 3 total, 3 up, 3 in
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.766961996 +0000 UTC m=+0.040453345 container create b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:43:32 compute-0 systemd[1]: Started libpod-conmon-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope.
Dec 04 10:43:32 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.74940779 +0000 UTC m=+0.022899169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.84530956 +0000 UTC m=+0.118800919 container init b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.85177775 +0000 UTC m=+0.125269099 container start b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.855226996 +0000 UTC m=+0.128718345 container attach b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:43:32 compute-0 peaceful_torvalds[252948]: 167 167
Dec 04 10:43:32 compute-0 systemd[1]: libpod-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope: Deactivated successfully.
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.85905591 +0000 UTC m=+0.132547279 container died b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1ae07b434d91ce95445ac4aba202752cbf83048dccc252c3788af9729b879d-merged.mount: Deactivated successfully.
Dec 04 10:43:32 compute-0 podman[252931]: 2025-12-04 10:43:32.897823743 +0000 UTC m=+0.171315092 container remove b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:43:32 compute-0 systemd[1]: libpod-conmon-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope: Deactivated successfully.
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.057133256 +0000 UTC m=+0.043520961 container create 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:43:33 compute-0 systemd[1]: Started libpod-conmon-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope.
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.036494833 +0000 UTC m=+0.022882558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:33 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.159195458 +0000 UTC m=+0.145583193 container init 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.167250658 +0000 UTC m=+0.153638363 container start 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.170597131 +0000 UTC m=+0.156984836 container attach 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:43:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:33 compute-0 jovial_darwin[252986]: {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     "0": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "devices": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "/dev/loop3"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             ],
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_name": "ceph_lv0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_size": "21470642176",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "name": "ceph_lv0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "tags": {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_name": "ceph",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.crush_device_class": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.encrypted": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.objectstore": "bluestore",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_id": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.vdo": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.with_tpm": "0"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             },
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "vg_name": "ceph_vg0"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         }
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     ],
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     "1": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "devices": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "/dev/loop4"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             ],
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_name": "ceph_lv1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_size": "21470642176",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "name": "ceph_lv1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "tags": {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_name": "ceph",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.crush_device_class": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.encrypted": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.objectstore": "bluestore",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_id": "1",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.vdo": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.with_tpm": "0"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             },
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "vg_name": "ceph_vg1"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         }
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     ],
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     "2": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "devices": [
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "/dev/loop5"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             ],
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_name": "ceph_lv2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_size": "21470642176",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "name": "ceph_lv2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "tags": {
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.cluster_name": "ceph",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.crush_device_class": "",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.encrypted": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.objectstore": "bluestore",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osd_id": "2",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.vdo": "0",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:                 "ceph.with_tpm": "0"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             },
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "type": "block",
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:             "vg_name": "ceph_vg2"
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:         }
Dec 04 10:43:33 compute-0 jovial_darwin[252986]:     ]
Dec 04 10:43:33 compute-0 jovial_darwin[252986]: }
Dec 04 10:43:33 compute-0 systemd[1]: libpod-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope: Deactivated successfully.
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.475008854 +0000 UTC m=+0.461396569 container died 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e-merged.mount: Deactivated successfully.
Dec 04 10:43:33 compute-0 ceph-mon[75358]: pgmap v1003: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:33 compute-0 podman[252970]: 2025-12-04 10:43:33.521159469 +0000 UTC m=+0.507547164 container remove 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:43:33 compute-0 systemd[1]: libpod-conmon-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope: Deactivated successfully.
Dec 04 10:43:33 compute-0 sudo[252895]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:33 compute-0 sudo[253006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:43:33 compute-0 sudo[253006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:33 compute-0 sudo[253006]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:33 compute-0 sudo[253031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:43:33 compute-0 sudo[253031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:33 compute-0 podman[253068]: 2025-12-04 10:43:33.968076918 +0000 UTC m=+0.039102230 container create eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:43:33 compute-0 systemd[1]: Started libpod-conmon-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope.
Dec 04 10:43:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:34.028454097 +0000 UTC m=+0.099479439 container init eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:34.034664311 +0000 UTC m=+0.105689623 container start eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:34.038916916 +0000 UTC m=+0.109942228 container attach eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:43:34 compute-0 optimistic_kirch[253086]: 167 167
Dec 04 10:43:34 compute-0 systemd[1]: libpod-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope: Deactivated successfully.
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:34.041529902 +0000 UTC m=+0.112555224 container died eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:33.950986565 +0000 UTC m=+0.022011897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-11550c334abdca8a354a3398de999e06b7e2d0be81b6a3197590b1494ab667b4-merged.mount: Deactivated successfully.
Dec 04 10:43:34 compute-0 podman[253082]: 2025-12-04 10:43:34.078086328 +0000 UTC m=+0.075697459 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 04 10:43:34 compute-0 podman[253068]: 2025-12-04 10:43:34.084703283 +0000 UTC m=+0.155728595 container remove eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:43:34 compute-0 systemd[1]: libpod-conmon-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope: Deactivated successfully.
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e'.
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta.tmp'
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta.tmp' to config b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta'
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:34 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:34 compute-0 podman[253128]: 2025-12-04 10:43:34.243009291 +0000 UTC m=+0.043322096 container create af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 systemd[1]: Started libpod-conmon-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope.
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:43:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:34 compute-0 podman[253128]: 2025-12-04 10:43:34.226179643 +0000 UTC m=+0.026492478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:43:34 compute-0 podman[253128]: 2025-12-04 10:43:34.331842855 +0000 UTC m=+0.132155690 container init af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:43:34 compute-0 podman[253128]: 2025-12-04 10:43:34.33969625 +0000 UTC m=+0.140009065 container start af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:43:34 compute-0 podman[253128]: 2025-12-04 10:43:34.344172011 +0000 UTC m=+0.144484836 container attach af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:43:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 04 10:43:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 04 10:43:34 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve49 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec 04 10:43:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:34 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:35 compute-0 lvm[253223]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:43:35 compute-0 lvm[253224]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:43:35 compute-0 lvm[253223]: VG ceph_vg0 finished
Dec 04 10:43:35 compute-0 lvm[253224]: VG ceph_vg1 finished
Dec 04 10:43:35 compute-0 lvm[253226]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:43:35 compute-0 lvm[253226]: VG ceph_vg2 finished
Dec 04 10:43:35 compute-0 awesome_sutherland[253145]: {}
Dec 04 10:43:35 compute-0 systemd[1]: libpod-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Deactivated successfully.
Dec 04 10:43:35 compute-0 systemd[1]: libpod-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Consumed 1.433s CPU time.
Dec 04 10:43:35 compute-0 podman[253128]: 2025-12-04 10:43:35.206187952 +0000 UTC m=+1.006500797 container died af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311-merged.mount: Deactivated successfully.
Dec 04 10:43:35 compute-0 podman[253128]: 2025-12-04 10:43:35.256994782 +0000 UTC m=+1.057307597 container remove af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:43:35 compute-0 systemd[1]: libpod-conmon-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Deactivated successfully.
Dec 04 10:43:35 compute-0 sudo[253031]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:43:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:43:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:35 compute-0 sudo[253242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:43:35 compute-0 sudo[253242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:43:35 compute-0 sudo[253242]: pam_unix(sudo:session): session closed for user root
Dec 04 10:43:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 04 10:43:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:43:35 compute-0 ceph-mon[75358]: pgmap v1004: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd'.
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta.tmp'
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta.tmp' to config b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta'
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:43:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:43:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671164381540543 of space, bias 1.0, pg target 0.2001349314462163 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0001505944399253973 of space, bias 4.0, pg target 0.18071332791047676 quantized to 16 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:43:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:37 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec 04 10:43:38 compute-0 ceph-mon[75358]: pgmap v1005: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 04 10:43:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 04 10:43:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve48 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec 04 10:43:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 04 10:43:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/cc465653-3416-44ed-bf17-d6453499d24f'.
Dec 04 10:43:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 04 10:43:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta.tmp'
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta.tmp' to config b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta'
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:39 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:39 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 04 10:43:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:39 compute-0 ceph-mon[75358]: osdmap e140: 3 total, 3 up, 3 in
Dec 04 10:43:39 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 75 KiB/s wr, 10 op/s
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40'.
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta.tmp'
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta.tmp' to config b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta'
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:43:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: pgmap v1007: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 75 KiB/s wr, 10 op/s
Dec 04 10:43:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:43:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 04 10:43:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:43:40 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID Joe with tenant d831ca1755a740e7819c02d320ecd2a0
Dec 04 10:43:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:43:41 compute-0 nova_compute[244644]: 2025-12-04 10:43:41.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:41 compute-0 nova_compute[244644]: 2025-12-04 10:43:41.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:43:41 compute-0 nova_compute[244644]: 2025-12-04 10:43:41.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:43:41 compute-0 nova_compute[244644]: 2025-12-04 10:43:41.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:43:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec 04 10:43:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:43:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 825 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 04 10:43:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 04 10:43:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Dec 04 10:43:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 04 10:43:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec 04 10:43:42 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 nova_compute[244644]: 2025-12-04 10:43:42.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:42 compute-0 nova_compute[244644]: 2025-12-04 10:43:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:42 compute-0 nova_compute[244644]: 2025-12-04 10:43:42.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 04 10:43:42 compute-0 nova_compute[244644]: 2025-12-04 10:43:42.361 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 04 10:43:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:42 compute-0 ceph-mon[75358]: pgmap v1008: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 825 B/s rd, 61 KiB/s wr, 8 op/s
Dec 04 10:43:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 04 10:43:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 04 10:43:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:376dc4db-618b-4da3-9877-daf0c0185878, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:376dc4db-618b-4da3-9877-daf0c0185878, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:42.893+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '376dc4db-618b-4da3-9877-daf0c0185878' of type subvolume
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '376dc4db-618b-4da3-9877-daf0c0185878' of type subvolume
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878'' moved to trashcan
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec 04 10:43:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 04 10:43:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 04 10:43:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:43 compute-0 podman[253269]: 2025-12-04 10:43:43.946174559 +0000 UTC m=+0.056507963 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:43:43 compute-0 podman[253268]: 2025-12-04 10:43:43.980030398 +0000 UTC m=+0.090364872 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:44 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec 04 10:43:44 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:44 compute-0 ceph-mon[75358]: pgmap v1009: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5'.
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta.tmp'
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta.tmp' to config b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta'
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:44 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.361 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.414 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.416 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:43:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec 04 10:43:45 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:45 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 04 10:43:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 04 10:43:45 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve47 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec 04 10:43:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec 04 10:43:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:43:45 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338810546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:43:45 compute-0 nova_compute[244644]: 2025-12-04 10:43:45.973 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.178 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.180 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.180 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.181 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.400 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.401 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:43:46 compute-0 ceph-mon[75358]: pgmap v1010: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:46 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 04 10:43:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:46 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3338810546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.488 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/81816eed-4b43-43d9-9a2c-a8df9562f2c7'.
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta.tmp'
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta.tmp' to config b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta'
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:46 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.549 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.549 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.580 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.602 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 04 10:43:46 compute-0 nova_compute[244644]: 2025-12-04 10:43:46.619 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:43:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:43:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632855522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.138 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.144 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.158 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.159 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 04 10:43:47 compute-0 nova_compute[244644]: 2025-12-04 10:43:47.170 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec 04 10:43:47 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:47 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2632855522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:43:47 compute-0 ceph-mon[75358]: pgmap v1011: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:43:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 04 10:43:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:43:47 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:47.961+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 04 10:43:47 compute-0 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 04 10:43:48 compute-0 nova_compute[244644]: 2025-12-04 10:43:48.150 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:48 compute-0 nova_compute[244644]: 2025-12-04 10:43:48.151 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:48 compute-0 nova_compute[244644]: 2025-12-04 10:43:48.181 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:48 compute-0 nova_compute[244644]: 2025-12-04 10:43:48.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:43:48 compute-0 nova_compute[244644]: 2025-12-04 10:43:48.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:43:48 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:43:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 106 KiB/s wr, 13 op/s
Dec 04 10:43:49 compute-0 ceph-mon[75358]: pgmap v1012: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 106 KiB/s wr, 13 op/s
Dec 04 10:43:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 04 10:43:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 04 10:43:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 04 10:43:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Dec 04 10:43:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec 04 10:43:50 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:50.153+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889' of type subvolume
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889' of type subvolume
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889'' moved to trashcan
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec 04 10:43:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 90 KiB/s wr, 11 op/s
Dec 04 10:43:51 compute-0 ceph-mon[75358]: pgmap v1013: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 90 KiB/s wr, 11 op/s
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} v 0)
Dec 04 10:43:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec 04 10:43:52 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-1322111508 with tenant c2e0964e5703431eab30fd7c235961ae
Dec 04 10:43:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:43:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:52 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:43:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec 04 10:43:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:43:52 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/92c8c8fb-87d8-4b63-b6fb-001ecf8b1670'.
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:43:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 128 KiB/s wr, 15 op/s
Dec 04 10:43:53 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:43:53 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec 04 10:43:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:43:53 compute-0 ceph-mon[75358]: pgmap v1014: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 128 KiB/s wr, 15 op/s
Dec 04 10:43:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 04 10:43:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 04 10:43:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Dec 04 10:43:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 04 10:43:54 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec 04 10:43:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 04 10:43:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 04 10:43:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 04 10:43:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:54.613+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dec20aa6-db73-446c-9d5e-8597f7adaaa8' of type subvolume
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dec20aa6-db73-446c-9d5e-8597f7adaaa8' of type subvolume
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8'' moved to trashcan
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:43:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec 04 10:43:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:43:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:43:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065'
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 04 10:43:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec 04 10:43:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "force": true, "format": "json"}]: dispatch
Dec 04 10:43:55 compute-0 ceph-mon[75358]: pgmap v1015: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5
Dec 04 10:43:55 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5],prefix=session evict} (starting...)
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "format": "json"}]: dispatch
Dec 04 10:43:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:43:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:43:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:43:57 compute-0 sshd-session[253360]: Invalid user admin from 103.149.86.230 port 58236
Dec 04 10:43:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec 04 10:43:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "format": "json"}]: dispatch
Dec 04 10:43:57 compute-0 ceph-mon[75358]: pgmap v1016: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec 04 10:43:57 compute-0 sshd-session[253360]: Received disconnect from 103.149.86.230 port 58236:11: Bye Bye [preauth]
Dec 04 10:43:57 compute-0 sshd-session[253360]: Disconnected from invalid user admin 103.149.86.230 port 58236 [preauth]
Dec 04 10:43:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:43:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} v 0)
Dec 04 10:43:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec 04 10:43:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} v 0)
Dec 04 10:43:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} : dispatch
Dec 04 10:43:59 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"}]': finished
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1322111508, client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5
Dec 04 10:43:59 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-1322111508,client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5],prefix=session evict} (starting...)
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:43:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec 04 10:43:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} : dispatch
Dec 04 10:43:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"}]': finished
Dec 04 10:43:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:43:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 13 op/s
Dec 04 10:44:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "format": "json"}]: dispatch
Dec 04 10:44:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec 04 10:44:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec 04 10:44:00 compute-0 ceph-mon[75358]: pgmap v1017: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 13 op/s
Dec 04 10:44:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "format": "json"}]: dispatch
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:02 compute-0 ceph-mon[75358]: pgmap v1018: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 04 10:44:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:44:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Dec 04 10:44:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 04 10:44:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40
Dec 04 10:44:02 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40],prefix=session evict} (starting...)
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 04 10:44:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 04 10:44:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec 04 10:44:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 104 KiB/s wr, 12 op/s
Dec 04 10:44:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:44:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 04 10:44:04 compute-0 ceph-mon[75358]: pgmap v1019: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 104 KiB/s wr, 12 op/s
Dec 04 10:44:04 compute-0 podman[253364]: 2025-12-04 10:44:04.96849321 +0000 UTC m=+0.068123800 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "format": "json"}]: dispatch
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 8 op/s
Dec 04 10:44:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "format": "json"}]: dispatch
Dec 04 10:44:05 compute-0 ceph-mon[75358]: pgmap v1020: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 8 op/s
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c'.
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta.tmp'
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta.tmp' to config b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta'
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:44:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:44:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4923 writes, 22K keys, 4923 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4923 writes, 4923 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1593 writes, 7340 keys, 1593 commit groups, 1.0 writes per commit group, ingest: 10.02 MB, 0.02 MB/s
                                           Interval WAL: 1593 writes, 1593 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.1      0.24              0.07        12    0.020       0      0       0.0       0.0
                                             L6      1/0    7.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    142.5    116.9      0.68              0.21        11    0.062     49K   5820       0.0       0.0
                                            Sum      1/0    7.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    105.4    113.5      0.92              0.28        23    0.040     49K   5820       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    128.8    130.0      0.36              0.13        10    0.036     24K   2613       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.5    116.9      0.68              0.21        11    0.062     49K   5820       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.23              0.07        11    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 304.00 MB usage: 9.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000202 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(566,8.89 MB,2.92528%) FilterBlock(24,143.17 KB,0.0459922%) IndexBlock(24,267.27 KB,0.0858558%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:44:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:44:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Dec 04 10:44:06 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 04 10:44:06 compute-0 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 04 10:44:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:44:06 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:06.455+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 04 10:44:06 compute-0 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 04 10:44:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 04 10:44:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec 04 10:44:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 04 10:44:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 04 10:44:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 04 10:44:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 9 op/s
Dec 04 10:44:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:07 compute-0 ceph-mon[75358]: osdmap e141: 3 total, 3 up, 3 in
Dec 04 10:44:07 compute-0 ceph-mon[75358]: pgmap v1022: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 9 op/s
Dec 04 10:44:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "tenant_id": "3ba683091e694bf1800f8fdcd57277cf", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume authorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, tenant_id:3ba683091e694bf1800f8fdcd57277cf, vol_name:cephfs) < ""
Dec 04 10:44:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} v 0)
Dec 04 10:44:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec 04 10:44:09 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-792738809 with tenant 3ba683091e694bf1800f8fdcd57277cf
Dec 04 10:44:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume authorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, tenant_id:3ba683091e694bf1800f8fdcd57277cf, vol_name:cephfs) < ""
Dec 04 10:44:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec 04 10:44:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 53 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 9 op/s
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID david with tenant d831ca1755a740e7819c02d320ecd2a0
Dec 04 10:44:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "tenant_id": "3ba683091e694bf1800f8fdcd57277cf", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: pgmap v1023: 321 pgs: 321 active+clean; 53 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 9 op/s
Dec 04 10:44:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume deauthorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} v 0)
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} v 0)
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} : dispatch
Dec 04 10:44:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"}]': finished
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume deauthorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume evict, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-792738809, client_metadata.root=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c
Dec 04 10:44:10 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-792738809,client_metadata.root=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c],prefix=session evict} (starting...)
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume evict, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:10.817+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '485981c2-4d65-44e2-a4c4-d55efb5d64b6' of type subvolume
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '485981c2-4d65-44e2-a4c4-d55efb5d64b6' of type subvolume
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6'' moved to trashcan
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:10 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec 04 10:44:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} : dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"}]': finished
Dec 04 10:44:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 04 10:44:11 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 04 10:44:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:44:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:44:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:44:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:44:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 111 KiB/s wr, 11 op/s
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: osdmap e142: 3 total, 3 up, 3 in
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:44:12 compute-0 ceph-mon[75358]: pgmap v1025: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 111 KiB/s wr, 11 op/s
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 15 op/s
Dec 04 10:44:13 compute-0 ceph-mon[75358]: pgmap v1026: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 15 op/s
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "format": "json"}]: dispatch
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5'.
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta.tmp'
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta.tmp' to config b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta'
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:44:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 04 10:44:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 04 10:44:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 04 10:44:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "format": "json"}]: dispatch
Dec 04 10:44:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec 04 10:44:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:14 compute-0 ceph-mon[75358]: osdmap e143: 3 total, 3 up, 3 in
Dec 04 10:44:14 compute-0 podman[253387]: 2025-12-04 10:44:14.955174981 +0000 UTC m=+0.059760325 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 04 10:44:14 compute-0 podman[253386]: 2025-12-04 10:44:14.992700922 +0000 UTC m=+0.096705041 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:44:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 14 op/s
Dec 04 10:44:15 compute-0 ceph-mon[75358]: pgmap v1028: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 14 op/s
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:44:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 04 10:44:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec 04 10:44:17 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:17.272+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 04 10:44:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 10 op/s
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:18 compute-0 ceph-mon[75358]: pgmap v1029: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 10 op/s
Dec 04 10:44:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 35 KiB/s wr, 7 op/s
Dec 04 10:44:20 compute-0 ceph-mon[75358]: pgmap v1030: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 35 KiB/s wr, 7 op/s
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d'
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5
Dec 04 10:44:20 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5],prefix=session evict} (starting...)
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "format": "json"}]: dispatch
Dec 04 10:44:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 62 KiB/s wr, 7 op/s
Dec 04 10:44:21 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:21 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:21 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "format": "json"}]: dispatch
Dec 04 10:44:21 compute-0 ceph-mon[75358]: pgmap v1031: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 62 KiB/s wr, 7 op/s
Dec 04 10:44:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:44:23 compute-0 ceph-mon[75358]: pgmap v1032: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:44:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 04 10:44:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 04 10:44:24 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 04 10:44:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Dec 04 10:44:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 04 10:44:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd
Dec 04 10:44:24 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd],prefix=session evict} (starting...)
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 04 10:44:25 compute-0 ceph-mon[75358]: osdmap e144: 3 total, 3 up, 3 in
Dec 04 10:44:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 04 10:44:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 04 10:44:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec 04 10:44:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 04 10:44:25 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 04 10:44:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 42 KiB/s wr, 5 op/s
Dec 04 10:44:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec 04 10:44:26 compute-0 ceph-mon[75358]: osdmap e145: 3 total, 3 up, 3 in
Dec 04 10:44:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:44:26
Dec 04 10:44:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:44:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:44:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log']
Dec 04 10:44:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mon[75358]: pgmap v1035: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 42 KiB/s wr, 5 op/s
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 76 KiB/s wr, 6 op/s
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba'.
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta.tmp'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta.tmp' to config b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta'
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:44:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:44:28 compute-0 sshd-session[253434]: Invalid user posiflex from 217.154.62.22 port 46284
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:28 compute-0 sshd-session[253434]: Received disconnect from 217.154.62.22 port 46284:11: Bye Bye [preauth]
Dec 04 10:44:28 compute-0 sshd-session[253434]: Disconnected from invalid user posiflex 217.154.62.22 port 46284 [preauth]
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:28 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:28.844+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d' of type subvolume
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d' of type subvolume
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d'' moved to trashcan
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec 04 10:44:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:29 compute-0 ceph-mon[75358]: pgmap v1036: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 76 KiB/s wr, 6 op/s
Dec 04 10:44:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:44:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec 04 10:44:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 35 KiB/s wr, 5 op/s
Dec 04 10:44:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec 04 10:44:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 04 10:44:31 compute-0 ceph-mon[75358]: pgmap v1037: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 35 KiB/s wr, 5 op/s
Dec 04 10:44:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 04 10:44:31 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 90 KiB/s wr, 7 op/s
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:44:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:44:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "format": "json"}]: dispatch
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:32 compute-0 ceph-mon[75358]: osdmap e146: 3 total, 3 up, 3 in
Dec 04 10:44:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:32 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:32.606+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065' of type subvolume
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065' of type subvolume
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065'' moved to trashcan
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec 04 10:44:33 compute-0 ceph-mon[75358]: pgmap v1039: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 90 KiB/s wr, 7 op/s
Dec 04 10:44:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "format": "json"}]: dispatch
Dec 04 10:44:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 631 B/s rd, 81 KiB/s wr, 9 op/s
Dec 04 10:44:33 compute-0 nova_compute[244644]: 2025-12-04 10:44:33.748 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 04 10:44:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 04 10:44:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 04 10:44:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec 04 10:44:34 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:34 compute-0 ceph-mon[75358]: osdmap e147: 3 total, 3 up, 3 in
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:44:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:44:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:44:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:44:35 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 48 KiB/s wr, 7 op/s
Dec 04 10:44:35 compute-0 sudo[253437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:44:35 compute-0 sudo[253437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:35 compute-0 sudo[253437]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:35 compute-0 podman[253461]: 2025-12-04 10:44:35.570545256 +0000 UTC m=+0.059829486 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec 04 10:44:35 compute-0 sudo[253468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:44:35 compute-0 sudo[253468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:35 compute-0 ceph-mon[75358]: pgmap v1040: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 631 B/s rd, 81 KiB/s wr, 9 op/s
Dec 04 10:44:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:44:35 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:44:35 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:36 compute-0 sudo[253468]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:44:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:44:36 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:36 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:36.272+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '264f5d7d-c08e-42d9-b63c-55452b2c5eef' of type subvolume
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '264f5d7d-c08e-42d9-b63c-55452b2c5eef' of type subvolume
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef'' moved to trashcan
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec 04 10:44:36 compute-0 sudo[253538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:44:36 compute-0 sudo[253538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:36 compute-0 sudo[253538]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:36 compute-0 sudo[253563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:44:36 compute-0 sudo[253563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.642384171 +0000 UTC m=+0.042083465 container create 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:44:36 compute-0 systemd[1]: Started libpod-conmon-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope.
Dec 04 10:44:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.624060317 +0000 UTC m=+0.023759641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.723862734 +0000 UTC m=+0.123562048 container init 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.731924453 +0000 UTC m=+0.131623737 container start 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.735248636 +0000 UTC m=+0.134947950 container attach 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:44:36 compute-0 elastic_meninsky[253616]: 167 167
Dec 04 10:44:36 compute-0 systemd[1]: libpod-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope: Deactivated successfully.
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.738503477 +0000 UTC m=+0.138202781 container died 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-890b9a7e75528141c9312b58b563ecee25ee39b775c0b8da1dfa899cae451f83-merged.mount: Deactivated successfully.
Dec 04 10:44:36 compute-0 podman[253600]: 2025-12-04 10:44:36.782287373 +0000 UTC m=+0.181986667 container remove 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:44:36 compute-0 systemd[1]: libpod-conmon-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope: Deactivated successfully.
Dec 04 10:44:36 compute-0 podman[253639]: 2025-12-04 10:44:36.973255872 +0000 UTC m=+0.042532797 container create b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:44:37 compute-0 systemd[1]: Started libpod-conmon-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope.
Dec 04 10:44:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:36.953707257 +0000 UTC m=+0.022984202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:37.061299567 +0000 UTC m=+0.130576522 container init b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:37.068602778 +0000 UTC m=+0.137879713 container start b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:37.072144115 +0000 UTC m=+0.141421040 container attach b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:44:37 compute-0 ceph-mon[75358]: pgmap v1042: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 48 KiB/s wr, 7 op/s
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec 04 10:44:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671892818066651 of space, bias 1.0, pg target 0.20015678454199953 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022318334157806733 of space, bias 4.0, pg target 0.26782000989368077 quantized to 16 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 7.630884938464543e-07 of space, bias 1.0, pg target 0.00022892654815393631 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 127 KiB/s wr, 9 op/s
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:37 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:37 compute-0 blissful_jemison[253656]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:44:37 compute-0 blissful_jemison[253656]: --> All data devices are unavailable
Dec 04 10:44:37 compute-0 systemd[1]: libpod-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope: Deactivated successfully.
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:37.523748821 +0000 UTC m=+0.593025746 container died b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019-merged.mount: Deactivated successfully.
Dec 04 10:44:37 compute-0 podman[253639]: 2025-12-04 10:44:37.566713437 +0000 UTC m=+0.635990362 container remove b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:44:37 compute-0 systemd[1]: libpod-conmon-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope: Deactivated successfully.
Dec 04 10:44:37 compute-0 sudo[253563]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:37 compute-0 sudo[253689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:44:37 compute-0 sudo[253689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:37 compute-0 sudo[253689]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:37 compute-0 sudo[253714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:44:37 compute-0 sudo[253714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.02463288 +0000 UTC m=+0.041202834 container create 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:44:38 compute-0 systemd[1]: Started libpod-conmon-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope.
Dec 04 10:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.006689454 +0000 UTC m=+0.023259428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.105079945 +0000 UTC m=+0.121649929 container init 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.11415418 +0000 UTC m=+0.130724134 container start 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.117799431 +0000 UTC m=+0.134369405 container attach 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:44:38 compute-0 funny_greider[253767]: 167 167
Dec 04 10:44:38 compute-0 systemd[1]: libpod-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope: Deactivated successfully.
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.121487113 +0000 UTC m=+0.138057067 container died 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cfeac85b3652f0646b2698ee37b9c31ae7f9e3e8c039a089e128c7c98dab6c3-merged.mount: Deactivated successfully.
Dec 04 10:44:38 compute-0 podman[253751]: 2025-12-04 10:44:38.158916751 +0000 UTC m=+0.175486705 container remove 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:44:38 compute-0 systemd[1]: libpod-conmon-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope: Deactivated successfully.
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.315497176 +0000 UTC m=+0.043238794 container create 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:44:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:44:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:44:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:44:38 compute-0 systemd[1]: Started libpod-conmon-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope.
Dec 04 10:44:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.295602693 +0000 UTC m=+0.023344341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.399221264 +0000 UTC m=+0.126962892 container init 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.408611297 +0000 UTC m=+0.136352955 container start 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.412719049 +0000 UTC m=+0.140460667 container attach 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]: {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     "0": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "devices": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "/dev/loop3"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             ],
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_name": "ceph_lv0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_size": "21470642176",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "name": "ceph_lv0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "tags": {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_name": "ceph",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.crush_device_class": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.encrypted": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.objectstore": "bluestore",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_id": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.vdo": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.with_tpm": "0"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             },
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "vg_name": "ceph_vg0"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         }
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     ],
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     "1": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "devices": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "/dev/loop4"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             ],
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_name": "ceph_lv1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_size": "21470642176",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "name": "ceph_lv1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "tags": {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_name": "ceph",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.crush_device_class": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.encrypted": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.objectstore": "bluestore",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_id": "1",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.vdo": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.with_tpm": "0"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             },
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "vg_name": "ceph_vg1"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         }
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     ],
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     "2": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "devices": [
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "/dev/loop5"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             ],
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_name": "ceph_lv2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_size": "21470642176",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "name": "ceph_lv2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "tags": {
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.cluster_name": "ceph",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.crush_device_class": "",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.encrypted": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.objectstore": "bluestore",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osd_id": "2",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.vdo": "0",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:                 "ceph.with_tpm": "0"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             },
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "type": "block",
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:             "vg_name": "ceph_vg2"
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:         }
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]:     ]
Dec 04 10:44:38 compute-0 flamboyant_merkle[253808]: }
Dec 04 10:44:38 compute-0 systemd[1]: libpod-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope: Deactivated successfully.
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.741133268 +0000 UTC m=+0.468874906 container died 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90-merged.mount: Deactivated successfully.
Dec 04 10:44:38 compute-0 podman[253791]: 2025-12-04 10:44:38.790542034 +0000 UTC m=+0.518283672 container remove 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:44:38 compute-0 systemd[1]: libpod-conmon-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope: Deactivated successfully.
Dec 04 10:44:38 compute-0 sudo[253714]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:38 compute-0 sudo[253828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:44:38 compute-0 sudo[253828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:38 compute-0 sudo[253828]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:38 compute-0 sudo[253853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:44:38 compute-0 sudo[253853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:39 compute-0 ceph-mon[75358]: pgmap v1043: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 127 KiB/s wr, 9 op/s
Dec 04 10:44:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:44:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.232442069 +0000 UTC m=+0.040487536 container create c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:44:39 compute-0 systemd[1]: Started libpod-conmon-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope.
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.214132634 +0000 UTC m=+0.022178131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:39 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.32557171 +0000 UTC m=+0.133617197 container init c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.33203806 +0000 UTC m=+0.140083527 container start c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:44:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 04 10:44:39 compute-0 boring_lamarr[253907]: 167 167
Dec 04 10:44:39 compute-0 systemd[1]: libpod-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope: Deactivated successfully.
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.33727249 +0000 UTC m=+0.145317957 container attach c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.338140642 +0000 UTC m=+0.146186139 container died c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:44:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 04 10:44:39 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 04 10:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-abcb6b4239bf0c61c9d2ff59390ea39460968e7d6186f6f3a51265e6df907dc1-merged.mount: Deactivated successfully.
Dec 04 10:44:39 compute-0 podman[253891]: 2025-12-04 10:44:39.371749746 +0000 UTC m=+0.179795203 container remove c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:44:39 compute-0 systemd[1]: libpod-conmon-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope: Deactivated successfully.
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 83 KiB/s wr, 10 op/s
Dec 04 10:44:39 compute-0 podman[253931]: 2025-12-04 10:44:39.534080524 +0000 UTC m=+0.040457035 container create 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:44:39 compute-0 systemd[1]: Started libpod-conmon-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope.
Dec 04 10:44:39 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:44:39 compute-0 podman[253931]: 2025-12-04 10:44:39.516179039 +0000 UTC m=+0.022555570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:44:39 compute-0 podman[253931]: 2025-12-04 10:44:39.619976335 +0000 UTC m=+0.126352866 container init 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:44:39 compute-0 podman[253931]: 2025-12-04 10:44:39.626027825 +0000 UTC m=+0.132404336 container start 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:44:39 compute-0 podman[253931]: 2025-12-04 10:44:39.629159963 +0000 UTC m=+0.135536474 container attach 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "format": "json"}]: dispatch
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 04 10:44:39 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:39.795+0000 7f8423c95640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:39 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:39.974+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b590878f-f5a4-4c4c-97ac-af9c32c4449c' of type subvolume
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b590878f-f5a4-4c4c-97ac-af9c32c4449c' of type subvolume
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c'' moved to trashcan
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec 04 10:44:40 compute-0 lvm[254026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:44:40 compute-0 lvm[254026]: VG ceph_vg0 finished
Dec 04 10:44:40 compute-0 lvm[254025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:44:40 compute-0 lvm[254025]: VG ceph_vg1 finished
Dec 04 10:44:40 compute-0 lvm[254028]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:44:40 compute-0 lvm[254028]: VG ceph_vg2 finished
Dec 04 10:44:40 compute-0 lvm[254030]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:44:40 compute-0 lvm[254030]: VG ceph_vg2 finished
Dec 04 10:44:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 04 10:44:40 compute-0 ceph-mon[75358]: osdmap e148: 3 total, 3 up, 3 in
Dec 04 10:44:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 04 10:44:40 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 04 10:44:40 compute-0 relaxed_joliot[253947]: {}
Dec 04 10:44:40 compute-0 systemd[1]: libpod-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Deactivated successfully.
Dec 04 10:44:40 compute-0 systemd[1]: libpod-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Consumed 1.286s CPU time.
Dec 04 10:44:40 compute-0 podman[253931]: 2025-12-04 10:44:40.402038951 +0000 UTC m=+0.908415462 container died 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e-merged.mount: Deactivated successfully.
Dec 04 10:44:40 compute-0 podman[253931]: 2025-12-04 10:44:40.449678013 +0000 UTC m=+0.956054524 container remove 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:44:40 compute-0 systemd[1]: libpod-conmon-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Deactivated successfully.
Dec 04 10:44:40 compute-0 sudo[253853]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:44:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:44:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:40 compute-0 sudo[254043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:44:40 compute-0 sudo[254043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:44:40 compute-0 sudo[254043]: pam_unix(sudo:session): session closed for user root
Dec 04 10:44:41 compute-0 ceph-mon[75358]: pgmap v1045: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 83 KiB/s wr, 10 op/s
Dec 04 10:44:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "format": "json"}]: dispatch
Dec 04 10:44:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec 04 10:44:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:41 compute-0 ceph-mon[75358]: osdmap e149: 3 total, 3 up, 3 in
Dec 04 10:44:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:44:41 compute-0 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:41 compute-0 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:44:41 compute-0 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:44:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 431 B/s rd, 175 KiB/s wr, 13 op/s
Dec 04 10:44:41 compute-0 nova_compute[244644]: 2025-12-04 10:44:41.600 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:44:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:44:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:44:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:44:42 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:42 compute-0 nova_compute[244644]: 2025-12-04 10:44:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:44:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:44:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:43 compute-0 ceph-mon[75358]: pgmap v1047: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 431 B/s rd, 175 KiB/s wr, 13 op/s
Dec 04 10:44:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:44:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 16 op/s
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:44:44 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:44.029+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5260b088-bfa9-4f9a-adc0-a90d452dc12f' of type subvolume
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5260b088-bfa9-4f9a-adc0-a90d452dc12f' of type subvolume
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f'' moved to trashcan
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:44:44 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec 04 10:44:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:44 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:44 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:45 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:45.047 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:44:45 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:45.048 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:44:45 compute-0 nova_compute[244644]: 2025-12-04 10:44:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:45 compute-0 nova_compute[244644]: 2025-12-04 10:44:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:45 compute-0 ceph-mon[75358]: pgmap v1048: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 16 op/s
Dec 04 10:44:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec 04 10:44:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "force": true, "format": "json"}]: dispatch
Dec 04 10:44:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 77 KiB/s wr, 11 op/s
Dec 04 10:44:45 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:44:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:45 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:44:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:45 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:45 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:45 compute-0 podman[254070]: 2025-12-04 10:44:45.965342473 +0000 UTC m=+0.061689252 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:44:46 compute-0 podman[254069]: 2025-12-04 10:44:46.087067853 +0000 UTC m=+0.183108915 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:44:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 04 10:44:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 04 10:44:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.363 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:44:47 compute-0 ceph-mon[75358]: pgmap v1049: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 77 KiB/s wr, 11 op/s
Dec 04 10:44:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:44:47 compute-0 ceph-mon[75358]: osdmap e150: 3 total, 3 up, 3 in
Dec 04 10:44:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 174 KiB/s wr, 14 op/s
Dec 04 10:44:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:44:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602699821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:44:47 compute-0 nova_compute[244644]: 2025-12-04 10:44:47.901 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.042 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.043 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.044 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.044 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:44:48 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:48.051 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.112 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.113 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.132 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:44:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2602699821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:44:48 compute-0 sshd-session[254112]: Invalid user ubuntu from 103.179.218.243 port 43190
Dec 04 10:44:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:44:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660295175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.661 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.667 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.686 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.688 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:44:48 compute-0 nova_compute[244644]: 2025-12-04 10:44:48.689 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:44:48 compute-0 sshd-session[254112]: Received disconnect from 103.179.218.243 port 43190:11: Bye Bye [preauth]
Dec 04 10:44:48 compute-0 sshd-session[254112]: Disconnected from invalid user ubuntu 103.179.218.243 port 43190 [preauth]
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 04 10:44:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 04 10:44:49 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 04 10:44:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:44:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:44:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:44:49 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 101 KiB/s wr, 13 op/s
Dec 04 10:44:49 compute-0 ceph-mon[75358]: pgmap v1051: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 174 KiB/s wr, 14 op/s
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2660295175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: osdmap e151: 3 total, 3 up, 3 in
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:44:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:49 compute-0 nova_compute[244644]: 2025-12-04 10:44:49.685 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:49 compute-0 nova_compute[244644]: 2025-12-04 10:44:49.685 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:49 compute-0 nova_compute[244644]: 2025-12-04 10:44:49.686 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:44:49 compute-0 nova_compute[244644]: 2025-12-04 10:44:49.686 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:44:51 compute-0 ceph-mon[75358]: pgmap v1053: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 101 KiB/s wr, 13 op/s
Dec 04 10:44:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 11 op/s
Dec 04 10:44:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:44:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:44:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:44:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:44:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:44:53 compute-0 ceph-mon[75358]: pgmap v1054: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 11 op/s
Dec 04 10:44:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:44:53 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:44:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 129 KiB/s wr, 12 op/s
Dec 04 10:44:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:44:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.911 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:44:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:44:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:44:55 compute-0 ceph-mon[75358]: pgmap v1055: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 129 KiB/s wr, 12 op/s
Dec 04 10:44:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 27 KiB/s wr, 5 op/s
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:44:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:44:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:44:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:44:56 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:44:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:44:57 compute-0 ceph-mon[75358]: pgmap v1056: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 27 KiB/s wr, 5 op/s
Dec 04 10:44:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:44:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:44:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:44:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:44:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:44:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:44:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:44:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:44:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 04 10:44:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 04 10:44:59 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 04 10:44:59 compute-0 ceph-mon[75358]: pgmap v1057: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 6 op/s
Dec 04 10:44:59 compute-0 ceph-mon[75358]: osdmap e152: 3 total, 3 up, 3 in
Dec 04 10:44:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 54 KiB/s wr, 5 op/s
Dec 04 10:45:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:00 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:00 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:00 compute-0 sshd-session[254159]: Invalid user posiflex from 101.47.163.20 port 38624
Dec 04 10:45:00 compute-0 sshd-session[254159]: Received disconnect from 101.47.163.20 port 38624:11: Bye Bye [preauth]
Dec 04 10:45:00 compute-0 sshd-session[254159]: Disconnected from invalid user posiflex 101.47.163.20 port 38624 [preauth]
Dec 04 10:45:01 compute-0 ceph-mon[75358]: pgmap v1059: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 54 KiB/s wr, 5 op/s
Dec 04 10:45:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 5 op/s
Dec 04 10:45:02 compute-0 ceph-mon[75358]: pgmap v1060: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 5 op/s
Dec 04 10:45:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:45:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:04 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:04 compute-0 ceph-mon[75358]: pgmap v1061: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:04 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:05 compute-0 podman[254163]: 2025-12-04 10:45:05.94652737 +0000 UTC m=+0.055243492 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 04 10:45:06 compute-0 ceph-mon[75358]: pgmap v1062: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 47 KiB/s wr, 5 op/s
Dec 04 10:45:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:07 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/5793e533-143c-4fc0-b4e1-f51624f69c54'.
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta.tmp'
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta.tmp' to config b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta'
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:08 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:45:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: pgmap v1063: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 47 KiB/s wr, 5 op/s
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.543583) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108543657, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 258, "total_data_size": 2855317, "memory_usage": 2908760, "flush_reason": "Manual Compaction"}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108560505, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2805124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21290, "largest_seqno": 23669, "table_properties": {"data_size": 2794481, "index_size": 6561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25596, "raw_average_key_size": 21, "raw_value_size": 2771740, "raw_average_value_size": 2325, "num_data_blocks": 290, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844965, "oldest_key_time": 1764844965, "file_creation_time": 1764845108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 16963 microseconds, and 6485 cpu microseconds.
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.560557) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2805124 bytes OK
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.560581) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562258) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562274) EVENT_LOG_v1 {"time_micros": 1764845108562270, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2844591, prev total WAL file size 2844591, number of live WAL files 2.
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.563069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2739KB)], [50(7590KB)]
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108563126, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10577678, "oldest_snapshot_seqno": -1}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5273 keys, 8731800 bytes, temperature: kUnknown
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108615391, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8731800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8694764, "index_size": 22782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130046, "raw_average_key_size": 24, "raw_value_size": 8598181, "raw_average_value_size": 1630, "num_data_blocks": 950, "num_entries": 5273, "num_filter_entries": 5273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.615689) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8731800 bytes
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.617353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 166.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 5803, records dropped: 530 output_compression: NoCompression
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.617370) EVENT_LOG_v1 {"time_micros": 1764845108617361, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52364, "compaction_time_cpu_micros": 19718, "output_level": 6, "num_output_files": 1, "total_output_size": 8731800, "num_input_records": 5803, "num_output_records": 5273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108617973, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108619404, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:08 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 85 KiB/s wr, 8 op/s
Dec 04 10:45:10 compute-0 ceph-mon[75358]: pgmap v1064: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 85 KiB/s wr, 8 op/s
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:45:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:11 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:45:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:45:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 6 op/s
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:45:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/85f48866-3ba8-4e88-a663-1bdf614917fb'.
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta.tmp'
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta.tmp' to config b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta'
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:45:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:12 compute-0 ceph-mon[75358]: pgmap v1065: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 6 op/s
Dec 04 10:45:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec 04 10:45:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 9 op/s
Dec 04 10:45:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:45:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:14 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:14 compute-0 ceph-mon[75358]: pgmap v1066: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 9 op/s
Dec 04 10:45:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 68 KiB/s wr, 6 op/s
Dec 04 10:45:15 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/6b3c2242-9930-48fe-b0aa-20deac217a1b'.
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta.tmp'
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta.tmp' to config b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta'
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:45:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:16 compute-0 ceph-mon[75358]: pgmap v1067: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 68 KiB/s wr, 6 op/s
Dec 04 10:45:16 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:16 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec 04 10:45:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:16 compute-0 podman[254185]: 2025-12-04 10:45:16.953151499 +0000 UTC m=+0.055620721 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 04 10:45:16 compute-0 podman[254184]: 2025-12-04 10:45:16.989573643 +0000 UTC m=+0.092732572 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:45:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 69 KiB/s wr, 7 op/s
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:45:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:45:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:45:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:18 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:18 compute-0 ceph-mon[75358]: pgmap v1068: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 69 KiB/s wr, 7 op/s
Dec 04 10:45:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:45:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:45:18 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 108 KiB/s wr, 10 op/s
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/72cd89ce-4efe-4b85-aea5-dc01ea42bb59'.
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta.tmp'
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta.tmp' to config b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta'
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:45:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:20 compute-0 ceph-mon[75358]: pgmap v1069: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 108 KiB/s wr, 10 op/s
Dec 04 10:45:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec 04 10:45:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:45:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:21 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:21 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:21 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:45:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8145 writes, 31K keys, 8145 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8145 writes, 1973 syncs, 4.13 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2453 writes, 7270 keys, 2453 commit groups, 1.0 writes per commit group, ingest: 9.86 MB, 0.02 MB/s
                                           Interval WAL: 2453 writes, 1058 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:45:22 compute-0 ceph-mon[75358]: pgmap v1070: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:45:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 123 KiB/s wr, 10 op/s
Dec 04 10:45:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:24.947+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7c4e2c1-3b68-4928-815d-84ba9442cbf1' of type subvolume
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7c4e2c1-3b68-4928-815d-84ba9442cbf1' of type subvolume
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1'' moved to trashcan
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:45:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec 04 10:45:25 compute-0 ceph-mon[75358]: pgmap v1071: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 123 KiB/s wr, 10 op/s
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:45:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:45:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:45:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:25 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 7 op/s
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:45:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:45:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:45:26
Dec 04 10:45:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:45:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:45:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 04 10:45:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:45:27 compute-0 ceph-mon[75358]: pgmap v1072: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 7 op/s
Dec 04 10:45:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 93 KiB/s wr, 8 op/s
Dec 04 10:45:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:28 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:28.758+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51dacf2e-4d8f-4133-9ae7-8b2784f31cc5' of type subvolume
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51dacf2e-4d8f-4133-9ae7-8b2784f31cc5' of type subvolume
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5'' moved to trashcan
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:45:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:28 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:45:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2807 syncs, 3.71 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3259 writes, 9856 keys, 3259 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s
                                           Interval WAL: 3259 writes, 1412 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:45:29 compute-0 ceph-mon[75358]: pgmap v1073: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 93 KiB/s wr, 8 op/s
Dec 04 10:45:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 126 KiB/s wr, 10 op/s
Dec 04 10:45:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec 04 10:45:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:31 compute-0 ceph-mon[75358]: pgmap v1074: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 126 KiB/s wr, 10 op/s
Dec 04 10:45:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 87 KiB/s wr, 8 op/s
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:32.097+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4fd84f8-9ca9-412b-a602-9496343f58ed' of type subvolume
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4fd84f8-9ca9-412b-a602-9496343f58ed' of type subvolume
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed'' moved to trashcan
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:45:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:45:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:45:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:32 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:33 compute-0 ceph-mon[75358]: pgmap v1075: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 87 KiB/s wr, 8 op/s
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:45:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 146 KiB/s wr, 12 op/s
Dec 04 10:45:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:35 compute-0 ceph-mon[75358]: pgmap v1076: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 146 KiB/s wr, 12 op/s
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 7 op/s
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab4df956-bd5e-4998-a6ef-078628986afd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab4df956-bd5e-4998-a6ef-078628986afd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:45:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:35.611+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab4df956-bd5e-4998-a6ef-078628986afd' of type subvolume
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab4df956-bd5e-4998-a6ef-078628986afd' of type subvolume
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd'' moved to trashcan
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:45:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
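The `auth get-or-create` calls above show the caps layout the mgr volumes module grants per subvolume: an mds path cap, an osd pool+namespace cap, and read-only mon access. A hypothetical helper (the function name and signature are illustration, not Ceph API; the RADOS namespace string is passed in verbatim rather than derived, since its exact encoding is internal to the volumes module):

```python
def subvolume_caps(access_level, subvol_path, data_pool, rados_namespace):
    """Build a cephx caps list in the alternating key/value shape that
    'auth get-or-create' accepts, mirroring the log lines above."""
    return [
        "mds", f"allow {access_level} path={subvol_path}",
        "osd", f"allow {access_level} pool={data_pool} namespace={rados_namespace}",
        "mon", "allow r",
    ]
```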
Dec 04 10:45:36 compute-0 podman[254228]: 2025-12-04 10:45:36.978225532 +0000 UTC m=+0.078702601 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667238154743242 of space, bias 1.0, pg target 0.2001714464229726 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003141884134020231 of space, bias 4.0, pg target 0.3770260960824277 quantized to 16 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
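Each pg_autoscaler line above reports a raw pg target and the value it was "quantized to". A simplified sketch of the nearest-power-of-two rounding step (assumption: the real autoscaler additionally applies per-pool floors, bias, and a change threshold before adjusting pg_num, which is why several pools stay at 32 despite tiny targets):

```python
import math

def quantize_pg_num(target, floor=1):
    """Round a raw pg target to the nearest power of two, ties rounding up.
    Simplified model of the 'quantized to N' step in the autoscaler log."""
    if target <= floor:
        return floor
    lower = 2 ** math.floor(math.log2(target))
    upper = lower * 2
    return lower if target - lower < upper - target else upper
```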
Dec 04 10:45:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 125 KiB/s wr, 10 op/s
Dec 04 10:45:37 compute-0 ceph-mon[75358]: pgmap v1077: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 7 op/s
Dec 04 10:45:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec 04 10:45:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "force": true, "format": "json"}]: dispatch
Dec 04 10:45:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:45:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
                                           Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
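The "writes per sync" figures in the RocksDB dump above are plain ratios of the counters on the same line, rounded to two decimals; a one-line check reproduces both the cumulative and interval WAL values:

```python
def per_sync(writes, syncs):
    # WAL writes-per-sync ratio as RocksDB reports it (two decimal places).
    return round(writes / syncs, 2)
```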
Dec 04 10:45:38 compute-0 ceph-mon[75358]: pgmap v1078: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 125 KiB/s wr, 10 op/s
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:45:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:45:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:45:39 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:39 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
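The evict step above shows the mgr narrowing the MDS `session evict` asok command with two filters: the client's auth name and its mount root. A sketch of building that filter list (helper name is hypothetical; the `key=value` filter shape matches the `filters=[...]` in the mds log line):

```python
def evict_filters(auth_id, subvol_root):
    """Filter list in the shape passed to the MDS 'session evict' asok
    command, restricting eviction to one auth identity and mount root."""
    return [
        f"auth_name={auth_id}",
        f"client_metadata.root={subvol_root}",
    ]
```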
Dec 04 10:45:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 125 KiB/s wr, 11 op/s
Dec 04 10:45:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:45:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:45:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:45:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:45:40 compute-0 sudo[254250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:45:40 compute-0 sudo[254250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:40 compute-0 sudo[254250]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:40 compute-0 sudo[254275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:45:40 compute-0 sudo[254275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: pgmap v1079: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 125 KiB/s wr, 11 op/s
Dec 04 10:45:41 compute-0 sudo[254275]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:41 compute-0 sudo[254330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:45:41 compute-0 sudo[254330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:41 compute-0 sudo[254330]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:41 compute-0 sudo[254355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 04 10:45:41 compute-0 sudo[254355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 9 op/s
Dec 04 10:45:41 compute-0 sudo[254355]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:45:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:45:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:45:41 compute-0 sudo[254399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:45:41 compute-0 sudo[254399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:41 compute-0 sudo[254399]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:41 compute-0 sudo[254424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:45:41 compute-0 sudo[254424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.283732223 +0000 UTC m=+0.103097360 container create 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.202293705 +0000 UTC m=+0.021658862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:42 compute-0 systemd[1]: Started libpod-conmon-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope.
Dec 04 10:45:42 compute-0 nova_compute[244644]: 2025-12-04 10:45:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:42 compute-0 nova_compute[244644]: 2025-12-04 10:45:42.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:45:42 compute-0 nova_compute[244644]: 2025-12-04 10:45:42.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:45:42 compute-0 nova_compute[244644]: 2025-12-04 10:45:42.352 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:45:42 compute-0 nova_compute[244644]: 2025-12-04 10:45:42.353 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:42 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.97288472 +0000 UTC m=+0.792249887 container init 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:45:42 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:42 compute-0 ceph-mon[75358]: pgmap v1080: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 9 op/s
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:45:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.980840076 +0000 UTC m=+0.800205213 container start 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:45:42 compute-0 agitated_bose[254480]: 167 167
Dec 04 10:45:42 compute-0 systemd[1]: libpod-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope: Deactivated successfully.
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.987771176 +0000 UTC m=+0.807136343 container attach 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:45:42 compute-0 podman[254463]: 2025-12-04 10:45:42.988285038 +0000 UTC m=+0.807650175 container died 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:45:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:43 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c9d5db16fe0e9ee1ba840e6216aef8384abe22e63c5501774948a5e70ed2001-merged.mount: Deactivated successfully.
Dec 04 10:45:43 compute-0 podman[254463]: 2025-12-04 10:45:43.078371599 +0000 UTC m=+0.897736726 container remove 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:45:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:43 compute-0 systemd[1]: libpod-conmon-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope: Deactivated successfully.
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.265722336 +0000 UTC m=+0.064344880 container create 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:45:43 compute-0 systemd[1]: Started libpod-conmon-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope.
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.225476638 +0000 UTC m=+0.024099202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.396243468 +0000 UTC m=+0.194866062 container init 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.404649124 +0000 UTC m=+0.203271678 container start 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:45:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 118 KiB/s wr, 10 op/s
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.504259418 +0000 UTC m=+0.302882052 container attach 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:45:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec 04 10:45:43 compute-0 boring_wilson[254518]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:45:43 compute-0 boring_wilson[254518]: --> All data devices are unavailable
Dec 04 10:45:43 compute-0 systemd[1]: libpod-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope: Deactivated successfully.
Dec 04 10:45:43 compute-0 podman[254502]: 2025-12-04 10:45:43.884260191 +0000 UTC m=+0.682882735 container died 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332-merged.mount: Deactivated successfully.
Dec 04 10:45:44 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:44 compute-0 podman[254502]: 2025-12-04 10:45:44.070794528 +0000 UTC m=+0.869417082 container remove 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:45:44 compute-0 systemd[1]: libpod-conmon-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope: Deactivated successfully.
Dec 04 10:45:44 compute-0 sudo[254424]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:44 compute-0 sudo[254551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:45:44 compute-0 sudo[254551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:44 compute-0 sudo[254551]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:44 compute-0 sudo[254576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:45:44 compute-0 sudo[254576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.555410668 +0000 UTC m=+0.040818193 container create eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:45:44 compute-0 systemd[1]: Started libpod-conmon-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope.
Dec 04 10:45:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.630765416 +0000 UTC m=+0.116172961 container init eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.538153824 +0000 UTC m=+0.023561369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.636871006 +0000 UTC m=+0.122278531 container start eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.63989922 +0000 UTC m=+0.125306745 container attach eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:45:44 compute-0 zen_easley[254629]: 167 167
Dec 04 10:45:44 compute-0 systemd[1]: libpod-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope: Deactivated successfully.
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.643800826 +0000 UTC m=+0.129208371 container died eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 04 10:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dcb050ebfb1117d7aedcae894343d170ef9fd713ddd806043363e70ed34d844-merged.mount: Deactivated successfully.
Dec 04 10:45:44 compute-0 podman[254613]: 2025-12-04 10:45:44.684219358 +0000 UTC m=+0.169626883 container remove eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:45:44 compute-0 systemd[1]: libpod-conmon-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope: Deactivated successfully.
Dec 04 10:45:44 compute-0 podman[254653]: 2025-12-04 10:45:44.818682357 +0000 UTC m=+0.024560284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.051803205 +0000 UTC m=+0.257681122 container create aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:45:45 compute-0 ceph-mon[75358]: pgmap v1081: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 118 KiB/s wr, 10 op/s
Dec 04 10:45:45 compute-0 systemd[1]: Started libpod-conmon-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope.
Dec 04 10:45:45 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.140723118 +0000 UTC m=+0.346601045 container init aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.150383175 +0000 UTC m=+0.356261082 container start aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.155559871 +0000 UTC m=+0.361437808 container attach aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:45:45 compute-0 nova_compute[244644]: 2025-12-04 10:45:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]: {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     "0": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "devices": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "/dev/loop3"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             ],
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_name": "ceph_lv0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_size": "21470642176",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "name": "ceph_lv0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "tags": {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_name": "ceph",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.crush_device_class": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.encrypted": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.objectstore": "bluestore",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_id": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.vdo": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.with_tpm": "0"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             },
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "vg_name": "ceph_vg0"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         }
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     ],
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     "1": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "devices": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "/dev/loop4"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             ],
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_name": "ceph_lv1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_size": "21470642176",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "name": "ceph_lv1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "tags": {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_name": "ceph",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.crush_device_class": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.encrypted": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.objectstore": "bluestore",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_id": "1",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.vdo": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.with_tpm": "0"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             },
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "vg_name": "ceph_vg1"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         }
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     ],
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     "2": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "devices": [
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "/dev/loop5"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             ],
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_name": "ceph_lv2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_size": "21470642176",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "name": "ceph_lv2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "tags": {
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.cluster_name": "ceph",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.crush_device_class": "",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.encrypted": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.objectstore": "bluestore",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osd_id": "2",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.vdo": "0",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:                 "ceph.with_tpm": "0"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             },
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "type": "block",
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:             "vg_name": "ceph_vg2"
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:         }
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]:     ]
Dec 04 10:45:45 compute-0 unruffled_khorana[254670]: }
Dec 04 10:45:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 6 op/s
Dec 04 10:45:45 compute-0 systemd[1]: libpod-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope: Deactivated successfully.
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.509427473 +0000 UTC m=+0.715305380 container died aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f-merged.mount: Deactivated successfully.
Dec 04 10:45:45 compute-0 podman[254653]: 2025-12-04 10:45:45.558111148 +0000 UTC m=+0.763989045 container remove aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:45:45 compute-0 systemd[1]: libpod-conmon-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope: Deactivated successfully.
Dec 04 10:45:45 compute-0 sudo[254576]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:45 compute-0 sudo[254691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:45:45 compute-0 sudo[254691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:45 compute-0 sudo[254691]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:45 compute-0 sudo[254716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:45:45 compute-0 sudo[254716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.002320006 +0000 UTC m=+0.045723583 container create 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:45:46 compute-0 systemd[1]: Started libpod-conmon-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope.
Dec 04 10:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.078911645 +0000 UTC m=+0.122315152 container init 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:45.983419673 +0000 UTC m=+0.026823200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.084903362 +0000 UTC m=+0.128306869 container start 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.088416979 +0000 UTC m=+0.131820486 container attach 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:45:46 compute-0 friendly_wilson[254768]: 167 167
Dec 04 10:45:46 compute-0 systemd[1]: libpod-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope: Deactivated successfully.
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.091453283 +0000 UTC m=+0.134856790 container died 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef4e9ebdd23098cbd9be01f32ba2121d214c8d8913c6c72830245e8f1f788b9e-merged.mount: Deactivated successfully.
Dec 04 10:45:46 compute-0 podman[254752]: 2025-12-04 10:45:46.138301132 +0000 UTC m=+0.181704629 container remove 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:45:46 compute-0 systemd[1]: libpod-conmon-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope: Deactivated successfully.
Dec 04 10:45:46 compute-0 podman[254792]: 2025-12-04 10:45:46.294539606 +0000 UTC m=+0.041064278 container create d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:45:46 compute-0 systemd[1]: Started libpod-conmon-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope.
Dec 04 10:45:46 compute-0 nova_compute[244644]: 2025-12-04 10:45:46.336 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:45:46 compute-0 podman[254792]: 2025-12-04 10:45:46.275924129 +0000 UTC m=+0.022448831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:45:46 compute-0 podman[254792]: 2025-12-04 10:45:46.384062332 +0000 UTC m=+0.130587034 container init d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:45:46 compute-0 podman[254792]: 2025-12-04 10:45:46.392753825 +0000 UTC m=+0.139278507 container start d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:45:46 compute-0 podman[254792]: 2025-12-04 10:45:46.396546528 +0000 UTC m=+0.143071240 container attach d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:45:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:46 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:47 compute-0 ceph-mon[75358]: pgmap v1082: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 6 op/s
Dec 04 10:45:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:47 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:47 compute-0 lvm[254910]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:45:47 compute-0 lvm[254909]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:45:47 compute-0 lvm[254909]: VG ceph_vg1 finished
Dec 04 10:45:47 compute-0 lvm[254910]: VG ceph_vg2 finished
Dec 04 10:45:47 compute-0 lvm[254907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:45:47 compute-0 lvm[254907]: VG ceph_vg0 finished
Dec 04 10:45:47 compute-0 podman[254886]: 2025-12-04 10:45:47.144577981 +0000 UTC m=+0.055906633 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 04 10:45:47 compute-0 podman[254884]: 2025-12-04 10:45:47.184147731 +0000 UTC m=+0.097974715 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 04 10:45:47 compute-0 flamboyant_turing[254808]: {}
Dec 04 10:45:47 compute-0 systemd[1]: libpod-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Deactivated successfully.
Dec 04 10:45:47 compute-0 systemd[1]: libpod-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Consumed 1.422s CPU time.
Dec 04 10:45:47 compute-0 podman[254792]: 2025-12-04 10:45:47.227687899 +0000 UTC m=+0.974212581 container died d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a-merged.mount: Deactivated successfully.
Dec 04 10:45:47 compute-0 podman[254792]: 2025-12-04 10:45:47.278384004 +0000 UTC m=+1.024908686 container remove d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:45:47 compute-0 systemd[1]: libpod-conmon-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Deactivated successfully.
Dec 04 10:45:47 compute-0 sudo[254716]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:45:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:45:47 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:45:47 compute-0 sudo[254946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:45:47 compute-0 sudo[254946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:45:47 compute-0 sudo[254946]: pam_unix(sudo:session): session closed for user root
Dec 04 10:45:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Dec 04 10:45:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:45:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573814568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:45:47 compute-0 nova_compute[244644]: 2025-12-04 10:45:47.915 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.089 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.090 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.090 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.091 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.158 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.159 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.176 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:45:48 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:48 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:48 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:45:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2573814568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:45:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:45:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275460644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.699 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.705 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.721 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.723 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:45:48 compute-0 nova_compute[244644]: 2025-12-04 10:45:48.723 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:45:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:49 compute-0 ceph-mon[75358]: pgmap v1083: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Dec 04 10:45:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4275460644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:45:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 7 op/s
Dec 04 10:45:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:50 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:50 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:50.364 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:45:50 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:50.366 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:45:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:50 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:50 compute-0 nova_compute[244644]: 2025-12-04 10:45:50.719 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:50 compute-0 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:50 compute-0 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:50 compute-0 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:45:50 compute-0 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:45:51 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:51.368 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:45:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 62 KiB/s wr, 6 op/s
Dec 04 10:45:51 compute-0 ceph-mon[75358]: pgmap v1084: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 7 op/s
Dec 04 10:45:51 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:45:52 compute-0 ceph-mon[75358]: pgmap v1085: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 62 KiB/s wr, 6 op/s
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 8 op/s
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:45:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:45:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:45:53 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:45:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:45:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.361782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154361831, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 866, "num_deletes": 260, "total_data_size": 892661, "memory_usage": 909800, "flush_reason": "Manual Compaction"}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154369812, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 871470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23670, "largest_seqno": 24535, "table_properties": {"data_size": 867155, "index_size": 1903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10179, "raw_average_key_size": 19, "raw_value_size": 858074, "raw_average_value_size": 1619, "num_data_blocks": 85, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845109, "oldest_key_time": 1764845109, "file_creation_time": 1764845154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8129 microseconds, and 3639 cpu microseconds.
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.369906) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 871470 bytes OK
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.369940) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371403) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371420) EVENT_LOG_v1 {"time_micros": 1764845154371414, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 888115, prev total WAL file size 888115, number of live WAL files 2.
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371995) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(851KB)], [53(8527KB)]
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154372066, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9603270, "oldest_snapshot_seqno": -1}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5268 keys, 9503917 bytes, temperature: kUnknown
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154432636, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9503917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9465394, "index_size": 24269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 131392, "raw_average_key_size": 24, "raw_value_size": 9367489, "raw_average_value_size": 1778, "num_data_blocks": 1012, "num_entries": 5268, "num_filter_entries": 5268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.432890) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9503917 bytes
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.434440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(21.9) write-amplify(10.9) OK, records in: 5803, records dropped: 535 output_compression: NoCompression
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.434457) EVENT_LOG_v1 {"time_micros": 1764845154434448, "job": 28, "event": "compaction_finished", "compaction_time_micros": 60647, "compaction_time_cpu_micros": 22746, "output_level": 6, "num_output_files": 1, "total_output_size": 9503917, "num_input_records": 5803, "num_output_records": 5268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154434688, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154436054, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:45:54 compute-0 ceph-mon[75358]: pgmap v1086: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 8 op/s
Dec 04 10:45:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:45:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:45:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:45:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:45:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:45:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.913 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:45:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.913 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:45:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 5 op/s
Dec 04 10:45:55 compute-0 sshd-session[255016]: Invalid user syncthing from 217.154.62.22 port 36434
Dec 04 10:45:55 compute-0 sshd-session[255016]: Received disconnect from 217.154.62.22 port 36434:11: Bye Bye [preauth]
Dec 04 10:45:55 compute-0 sshd-session[255016]: Disconnected from invalid user syncthing 217.154.62.22 port 36434 [preauth]
Dec 04 10:45:56 compute-0 ceph-mon[75358]: pgmap v1087: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 5 op/s
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/709503a4-ece9-4e76-b07e-7f97746dfdf4'.
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:45:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:45:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:45:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:45:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:45:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:45:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 9 op/s
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:45:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:45:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:45:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:45:59 compute-0 ceph-mon[75358]: pgmap v1088: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 9 op/s
Dec 04 10:45:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "format": "json"}]: dispatch
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:46:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:00 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:01 compute-0 ceph-mon[75358]: pgmap v1089: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:46:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "format": "json"}]: dispatch
Dec 04 10:46:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:46:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:03 compute-0 ceph-mon[75358]: pgmap v1090: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 8 op/s
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "target_sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, target_sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/6434388b-13b0-44fd-9f14-bc4785113c76'.
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 4fdc95a4-c293-4166-b342-259be81a8d49 for path b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, target_sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043)
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043) -- by 0 seconds
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec 04 10:46:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:46:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:04 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.snap/eb21836e-156d-4fd6-adb6-75fc9fe014e2/709503a4-ece9-4e76-b07e-7f97746dfdf4' to b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/6434388b-13b0-44fd-9f14-bc4785113c76'
Dec 04 10:46:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking 4fdc95a4-c293-4166-b342-259be81a8d49
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043)
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] Exception VolumeException was raised. Apparently an entry from the metadata file of clone source was removed because one of the clone job(s) has completed/cancelled. Therefore ignoring and proceeding Printing the exception: -22 (error fetching subvolume metadata)
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec 04 10:46:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec 04 10:46:05 compute-0 ceph-mon[75358]: pgmap v1091: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 8 op/s
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "target_sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:46:05 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.iwufnj(active, since 31m)
Dec 04 10:46:06 compute-0 ceph-mon[75358]: pgmap v1092: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:46:06 compute-0 ceph-mon[75358]: mgrmap e14: compute-0.iwufnj(active, since 31m)
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 116 KiB/s wr, 12 op/s
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:46:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:07 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:07 compute-0 podman[255057]: 2025-12-04 10:46:07.953916487 +0000 UTC m=+0.058885096 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 04 10:46:08 compute-0 ceph-mon[75358]: pgmap v1093: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 116 KiB/s wr, 12 op/s
Dec 04 10:46:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec 04 10:46:10 compute-0 ceph-mon[75358]: pgmap v1094: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec 04 10:46:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:11 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec 04 10:46:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:46:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:46:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:46:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:46:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:46:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:13 compute-0 ceph-mon[75358]: pgmap v1095: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec 04 10:46:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 132 KiB/s wr, 13 op/s
Dec 04 10:46:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:46:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:14 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:15 compute-0 ceph-mon[75358]: pgmap v1096: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 132 KiB/s wr, 13 op/s
Dec 04 10:46:15 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 93 KiB/s wr, 10 op/s
Dec 04 10:46:16 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:17 compute-0 ceph-mon[75358]: pgmap v1097: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 93 KiB/s wr, 10 op/s
Dec 04 10:46:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 111 KiB/s wr, 12 op/s
Dec 04 10:46:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:17 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:17 compute-0 podman[255080]: 2025-12-04 10:46:17.951264588 +0000 UTC m=+0.050068119 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:46:17 compute-0 podman[255079]: 2025-12-04 10:46:17.9810881 +0000 UTC m=+0.082546877 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 04 10:46:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:19 compute-0 ceph-mon[75358]: pgmap v1098: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 111 KiB/s wr, 12 op/s
Dec 04 10:46:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec 04 10:46:21 compute-0 ceph-mon[75358]: pgmap v1099: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:46:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:21 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec 04 10:46:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:46:22 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:22 compute-0 sshd-session[255125]: Invalid user terraria from 107.175.213.239 port 37430
Dec 04 10:46:23 compute-0 sshd-session[255125]: Received disconnect from 107.175.213.239 port 37430:11: Bye Bye [preauth]
Dec 04 10:46:23 compute-0 sshd-session[255125]: Disconnected from invalid user terraria 107.175.213.239 port 37430 [preauth]
Dec 04 10:46:23 compute-0 ceph-mon[75358]: pgmap v1100: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec 04 10:46:23 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 9 op/s
Dec 04 10:46:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:46:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:24 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/a2cad645-958b-479a-9a2c-83321704920d'.
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta.tmp'
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta.tmp' to config b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta'
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:46:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: pgmap v1101: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 9 op/s
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec 04 10:46:25 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 5 op/s
Dec 04 10:46:25 compute-0 sshd-session[255127]: Invalid user ventas01 from 103.179.218.243 port 43294
Dec 04 10:46:25 compute-0 sshd-session[255127]: Received disconnect from 103.179.218.243 port 43294:11: Bye Bye [preauth]
Dec 04 10:46:25 compute-0 sshd-session[255127]: Disconnected from invalid user ventas01 103.179.218.243 port 43294 [preauth]
Dec 04 10:46:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:46:26
Dec 04 10:46:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:46:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:46:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Dec 04 10:46:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:46:27 compute-0 ceph-mon[75358]: pgmap v1102: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 5 op/s
Dec 04 10:46:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 84 KiB/s wr, 8 op/s
Dec 04 10:46:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:46:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:46:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:46:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:46:28 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:28 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:46:28 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:29 compute-0 ceph-mon[75358]: pgmap v1103: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 84 KiB/s wr, 8 op/s
Dec 04 10:46:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:29 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec 04 10:46:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 04 10:46:31 compute-0 ceph-mon[75358]: pgmap v1104: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec 04 10:46:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:46:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:31 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec 04 10:46:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:31 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fdc591ae-48a2-4089-a539-01382bacd19f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fdc591ae-48a2-4089-a539-01382bacd19f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:32 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:32.430+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc591ae-48a2-4089-a539-01382bacd19f' of type subvolume
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc591ae-48a2-4089-a539-01382bacd19f' of type subvolume
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f'' moved to trashcan
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:46:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec 04 10:46:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Dec 04 10:46:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:33 compute-0 ceph-mon[75358]: pgmap v1105: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec 04 10:46:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec 04 10:46:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:35 compute-0 ceph-mon[75358]: pgmap v1106: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Dec 04 10:46:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:46:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:46:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:46:35 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:35 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 7 op/s
Dec 04 10:46:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:46:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:46:36 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:46:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:46:37 compute-0 ceph-mon[75358]: pgmap v1107: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 7 op/s
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667238154743242 of space, bias 1.0, pg target 0.2001714464229726 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00038834688462743443 of space, bias 4.0, pg target 0.4660162615529213 quantized to 16 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:46:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 107 KiB/s wr, 10 op/s
Dec 04 10:46:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:38 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:38 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:38 compute-0 podman[255131]: 2025-12-04 10:46:38.962300912 +0000 UTC m=+0.063873829 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd)
Dec 04 10:46:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec 04 10:46:39 compute-0 ceph-mon[75358]: pgmap v1108: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 107 KiB/s wr, 10 op/s
Dec 04 10:46:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:39 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/ef9ff5da-9de6-46b4-9a76-f26d18a22519'.
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta.tmp'
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta.tmp' to config b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta'
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:46:40 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:40 compute-0 ceph-mon[75358]: pgmap v1109: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec 04 10:46:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:40 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec 04 10:46:40 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:46:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:41 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:41 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:42 compute-0 ceph-mon[75358]: pgmap v1110: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec 04 10:46:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:43 compute-0 nova_compute[244644]: 2025-12-04 10:46:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:43 compute-0 nova_compute[244644]: 2025-12-04 10:46:43.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:46:43 compute-0 nova_compute[244644]: 2025-12-04 10:46:43.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:46:43 compute-0 nova_compute[244644]: 2025-12-04 10:46:43.358 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:46:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 04 10:46:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 132 KiB/s wr, 13 op/s
Dec 04 10:46:43 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 04 10:46:44 compute-0 nova_compute[244644]: 2025-12-04 10:46:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:44 compute-0 ceph-mon[75358]: pgmap v1111: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 132 KiB/s wr, 13 op/s
Dec 04 10:46:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 8 op/s
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:46 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:46 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:46 compute-0 ceph-mon[75358]: pgmap v1112: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 8 op/s
Dec 04 10:46:46 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:46:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:46 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:46 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:46.943+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3ac60cc-8acd-4ed9-b323-017b1c573a49' of type subvolume
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3ac60cc-8acd-4ed9-b323-017b1c573a49' of type subvolume
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49'' moved to trashcan
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:46:46 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:46:47 compute-0 sudo[255153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:46:47 compute-0 sudo[255153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:47 compute-0 sudo[255153]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 120 KiB/s wr, 11 op/s
Dec 04 10:46:47 compute-0 sudo[255197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:46:47 compute-0 sudo[255197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec 04 10:46:47 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:46:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019975673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:46:47 compute-0 nova_compute[244644]: 2025-12-04 10:46:47.942 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.105 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5031MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:46:48 compute-0 sudo[255197]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.189 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.190 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:46:48 compute-0 sudo[255255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:46:48 compute-0 sudo[255255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:48 compute-0 sudo[255255]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.221 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:46:48 compute-0 podman[255280]: 2025-12-04 10:46:48.266688121 +0000 UTC m=+0.061496840 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 04 10:46:48 compute-0 sudo[255292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- inventory --format=json-pretty --filter-for-batch
Dec 04 10:46:48 compute-0 sudo[255292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:48 compute-0 podman[255279]: 2025-12-04 10:46:48.322566442 +0000 UTC m=+0.119756969 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.569048579 +0000 UTC m=+0.041174141 container create 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:46:48 compute-0 systemd[1]: Started libpod-conmon-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope.
Dec 04 10:46:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.550578456 +0000 UTC m=+0.022704038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.661845906 +0000 UTC m=+0.133971488 container init 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.670361635 +0000 UTC m=+0.142487197 container start 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.67424755 +0000 UTC m=+0.146373142 container attach 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:46:48 compute-0 focused_hawking[255396]: 167 167
Dec 04 10:46:48 compute-0 systemd[1]: libpod-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope: Deactivated successfully.
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.677128241 +0000 UTC m=+0.149253813 container died 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-161597c1a231aa88bd3e1ed95e2a09b45dccd4b0b222fbe6d10940d894bc8122-merged.mount: Deactivated successfully.
Dec 04 10:46:48 compute-0 podman[255380]: 2025-12-04 10:46:48.720783963 +0000 UTC m=+0.192909525 container remove 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:46:48 compute-0 systemd[1]: libpod-conmon-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope: Deactivated successfully.
Dec 04 10:46:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:46:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/167740594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.783 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.789 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.812 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.814 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:46:48 compute-0 nova_compute[244644]: 2025-12-04 10:46:48.814 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:46:48 compute-0 ceph-mon[75358]: pgmap v1113: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 120 KiB/s wr, 11 op/s
Dec 04 10:46:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3019975673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:46:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/167740594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:46:48 compute-0 podman[255422]: 2025-12-04 10:46:48.875761824 +0000 UTC m=+0.040078784 container create a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:46:48 compute-0 systemd[1]: Started libpod-conmon-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope.
Dec 04 10:46:48 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:48 compute-0 podman[255422]: 2025-12-04 10:46:48.860468859 +0000 UTC m=+0.024785839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:48 compute-0 podman[255422]: 2025-12-04 10:46:48.970632592 +0000 UTC m=+0.134949572 container init a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:46:48 compute-0 podman[255422]: 2025-12-04 10:46:48.976984008 +0000 UTC m=+0.141300958 container start a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:46:48 compute-0 podman[255422]: 2025-12-04 10:46:48.980816622 +0000 UTC m=+0.145133612 container attach a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:49 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]: [
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:     {
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "available": false,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "being_replaced": false,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "ceph_device_lvm": false,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "lsm_data": {},
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "lvs": [],
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "path": "/dev/sr0",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "rejected_reasons": [
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "Has a FileSystem",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "Insufficient space (<5GB)"
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         ],
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         "sys_api": {
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "actuators": null,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "device_nodes": [
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:                 "sr0"
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             ],
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "devname": "sr0",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "human_readable_size": "482.00 KB",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "id_bus": "ata",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "model": "QEMU DVD-ROM",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "nr_requests": "2",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "parent": "/dev/sr0",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "partitions": {},
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "path": "/dev/sr0",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "removable": "1",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "rev": "2.5+",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "ro": "0",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "rotational": "1",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "sas_address": "",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "sas_device_handle": "",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "scheduler_mode": "mq-deadline",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "sectors": 0,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "sectorsize": "2048",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "size": 493568.0,
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "support_discard": "2048",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "type": "disk",
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:             "vendor": "QEMU"
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:         }
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]:     }
Dec 04 10:46:49 compute-0 pensive_meninsky[255438]: ]
Dec 04 10:46:49 compute-0 systemd[1]: libpod-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope: Deactivated successfully.
Dec 04 10:46:49 compute-0 podman[255422]: 2025-12-04 10:46:49.482746697 +0000 UTC m=+0.647063657 container died a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7-merged.mount: Deactivated successfully.
Dec 04 10:46:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 93 KiB/s wr, 7 op/s
Dec 04 10:46:49 compute-0 podman[255422]: 2025-12-04 10:46:49.527327071 +0000 UTC m=+0.691644041 container remove a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:46:49 compute-0 systemd[1]: libpod-conmon-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope: Deactivated successfully.
Dec 04 10:46:49 compute-0 sudo[255292]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:46:49 compute-0 sudo[256186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:46:49 compute-0 sudo[256186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:49 compute-0 sudo[256186]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:49 compute-0 sudo[256211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:46:49 compute-0 sudo[256211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:46:49 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.013917908 +0000 UTC m=+0.041347934 container create fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 04 10:46:50 compute-0 systemd[1]: Started libpod-conmon-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope.
Dec 04 10:46:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:49.997035084 +0000 UTC m=+0.024465150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.10812922 +0000 UTC m=+0.135559306 container init fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.116858205 +0000 UTC m=+0.144288271 container start fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.120910814 +0000 UTC m=+0.148340880 container attach fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:46:50 compute-0 recursing_ishizaka[256265]: 167 167
Dec 04 10:46:50 compute-0 systemd[1]: libpod-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope: Deactivated successfully.
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.12319915 +0000 UTC m=+0.150629186 container died fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-56cf2fb29970a64f3a11b95fe7b94b8f929b284cc2fd10e0aa9d10c5a9f7eac4-merged.mount: Deactivated successfully.
Dec 04 10:46:50 compute-0 podman[256248]: 2025-12-04 10:46:50.16883339 +0000 UTC m=+0.196263426 container remove fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:46:50 compute-0 systemd[1]: libpod-conmon-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope: Deactivated successfully.
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/cf716c94-9722-4ff4-9497-36c129aaac2e'.
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta.tmp'
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta.tmp' to config b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta'
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:46:50 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.358504503 +0000 UTC m=+0.056130788 container create ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:46:50 compute-0 systemd[1]: Started libpod-conmon-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope.
Dec 04 10:46:50 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.339740623 +0000 UTC m=+0.037366928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.456711882 +0000 UTC m=+0.154338177 container init ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.464396411 +0000 UTC m=+0.162022696 container start ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.468376549 +0000 UTC m=+0.166002864 container attach ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:46:50 compute-0 nova_compute[244644]: 2025-12-04 10:46:50.814 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:50 compute-0 nova_compute[244644]: 2025-12-04 10:46:50.815 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:50 compute-0 nova_compute[244644]: 2025-12-04 10:46:50.815 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:50 compute-0 ceph-mon[75358]: pgmap v1114: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 93 KiB/s wr, 7 op/s
Dec 04 10:46:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:46:50 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec 04 10:46:50 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:46:50 compute-0 clever_elgamal[256305]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:46:50 compute-0 clever_elgamal[256305]: --> All data devices are unavailable
Dec 04 10:46:50 compute-0 systemd[1]: libpod-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope: Deactivated successfully.
Dec 04 10:46:50 compute-0 podman[256289]: 2025-12-04 10:46:50.996597488 +0000 UTC m=+0.694223773 container died ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060-merged.mount: Deactivated successfully.
Dec 04 10:46:51 compute-0 podman[256289]: 2025-12-04 10:46:51.052138631 +0000 UTC m=+0.749764916 container remove ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:46:51 compute-0 systemd[1]: libpod-conmon-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope: Deactivated successfully.
Dec 04 10:46:51 compute-0 sudo[256211]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:51 compute-0 sudo[256338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:46:51 compute-0 sudo[256338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:51 compute-0 sudo[256338]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:51 compute-0 sudo[256363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:46:51 compute-0 sudo[256363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:51 compute-0 nova_compute[244644]: 2025-12-04 10:46:51.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:51 compute-0 nova_compute[244644]: 2025-12-04 10:46:51.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:46:51 compute-0 nova_compute[244644]: 2025-12-04 10:46:51.337 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:46:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 8 op/s
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.558994587 +0000 UTC m=+0.047045696 container create 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:46:51 compute-0 systemd[1]: Started libpod-conmon-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope.
Dec 04 10:46:51 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.53347497 +0000 UTC m=+0.021526059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.645249532 +0000 UTC m=+0.133300621 container init 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.654206533 +0000 UTC m=+0.142257602 container start 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 04 10:46:51 compute-0 hungry_cerf[256417]: 167 167
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.658127358 +0000 UTC m=+0.146178427 container attach 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:46:51 compute-0 systemd[1]: libpod-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope: Deactivated successfully.
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.661900931 +0000 UTC m=+0.149952020 container died 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-56ec130938897b4da6ebf26143b5ee86357d3f63f1d0e17b4a100521e7655d2b-merged.mount: Deactivated successfully.
Dec 04 10:46:51 compute-0 podman[256401]: 2025-12-04 10:46:51.704919657 +0000 UTC m=+0.192970726 container remove 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:46:51 compute-0 systemd[1]: libpod-conmon-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope: Deactivated successfully.
Dec 04 10:46:51 compute-0 podman[256440]: 2025-12-04 10:46:51.872354474 +0000 UTC m=+0.030642472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:52 compute-0 podman[256440]: 2025-12-04 10:46:52.136458864 +0000 UTC m=+0.294746842 container create a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:46:52 compute-0 systemd[1]: Started libpod-conmon-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope.
Dec 04 10:46:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:52 compute-0 podman[256440]: 2025-12-04 10:46:52.282607869 +0000 UTC m=+0.440895847 container init a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:46:52 compute-0 podman[256440]: 2025-12-04 10:46:52.291764404 +0000 UTC m=+0.450052372 container start a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:46:52 compute-0 podman[256440]: 2025-12-04 10:46:52.298834828 +0000 UTC m=+0.457122856 container attach a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:46:52 compute-0 wonderful_payne[256457]: {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     "0": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "devices": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "/dev/loop3"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             ],
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_name": "ceph_lv0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_size": "21470642176",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "name": "ceph_lv0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "tags": {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_name": "ceph",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.crush_device_class": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.encrypted": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.objectstore": "bluestore",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_id": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.vdo": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.with_tpm": "0"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             },
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "vg_name": "ceph_vg0"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         }
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     ],
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     "1": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "devices": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "/dev/loop4"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             ],
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_name": "ceph_lv1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_size": "21470642176",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "name": "ceph_lv1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "tags": {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_name": "ceph",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.crush_device_class": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.encrypted": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.objectstore": "bluestore",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_id": "1",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.vdo": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.with_tpm": "0"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             },
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "vg_name": "ceph_vg1"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         }
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     ],
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     "2": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "devices": [
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "/dev/loop5"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             ],
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_name": "ceph_lv2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_size": "21470642176",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "name": "ceph_lv2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "tags": {
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.cluster_name": "ceph",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.crush_device_class": "",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.encrypted": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.objectstore": "bluestore",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osd_id": "2",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.vdo": "0",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:                 "ceph.with_tpm": "0"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             },
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "type": "block",
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:             "vg_name": "ceph_vg2"
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:         }
Dec 04 10:46:52 compute-0 wonderful_payne[256457]:     ]
Dec 04 10:46:52 compute-0 wonderful_payne[256457]: }
Dec 04 10:46:52 compute-0 systemd[1]: libpod-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope: Deactivated successfully.
Dec 04 10:46:52 compute-0 podman[256440]: 2025-12-04 10:46:52.653339125 +0000 UTC m=+0.811627093 container died a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 04 10:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22-merged.mount: Deactivated successfully.
Dec 04 10:46:52 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:52 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:53 compute-0 ceph-mon[75358]: pgmap v1115: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 8 op/s
Dec 04 10:46:53 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:46:53 compute-0 podman[256440]: 2025-12-04 10:46:53.105767005 +0000 UTC m=+1.264054973 container remove a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:46:53 compute-0 systemd[1]: libpod-conmon-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope: Deactivated successfully.
Dec 04 10:46:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:46:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:53 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:46:53 compute-0 sudo[256363]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:53 compute-0 sudo[256479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:46:53 compute-0 sudo[256479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:53 compute-0 sudo[256479]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:53 compute-0 sudo[256504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:46:53 compute-0 sudo[256504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 121 KiB/s wr, 11 op/s
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.632784036 +0000 UTC m=+0.045041566 container create a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:46:53 compute-0 systemd[1]: Started libpod-conmon-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope.
Dec 04 10:46:53 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.613570004 +0000 UTC m=+0.025827524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.714244134 +0000 UTC m=+0.126501614 container init a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.722773764 +0000 UTC m=+0.135031254 container start a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.726363452 +0000 UTC m=+0.138620952 container attach a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:46:53 compute-0 relaxed_proskuriakova[256557]: 167 167
Dec 04 10:46:53 compute-0 systemd[1]: libpod-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope: Deactivated successfully.
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.72996115 +0000 UTC m=+0.142218660 container died a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94a08611d597ad99cac06c455de2d3f773de8ed7ae84a442e65d2640b9c8c05-merged.mount: Deactivated successfully.
Dec 04 10:46:53 compute-0 podman[256541]: 2025-12-04 10:46:53.777632839 +0000 UTC m=+0.189890349 container remove a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:46:53 compute-0 systemd[1]: libpod-conmon-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope: Deactivated successfully.
Dec 04 10:46:53 compute-0 podman[256581]: 2025-12-04 10:46:53.955740419 +0000 UTC m=+0.048936252 container create 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:46:53 compute-0 systemd[1]: Started libpod-conmon-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope.
Dec 04 10:46:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:54 compute-0 podman[256581]: 2025-12-04 10:46:53.933936105 +0000 UTC m=+0.027131988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:46:54 compute-0 podman[256581]: 2025-12-04 10:46:54.0446084 +0000 UTC m=+0.137804273 container init 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:46:54 compute-0 podman[256581]: 2025-12-04 10:46:54.051743565 +0000 UTC m=+0.144939398 container start 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:46:54 compute-0 podman[256581]: 2025-12-04 10:46:54.055906207 +0000 UTC m=+0.149102070 container attach 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:46:54 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:46:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:46:54 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:46:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:54 compute-0 lvm[256676]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:46:54 compute-0 lvm[256677]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:46:54 compute-0 lvm[256677]: VG ceph_vg1 finished
Dec 04 10:46:54 compute-0 lvm[256676]: VG ceph_vg0 finished
Dec 04 10:46:54 compute-0 lvm[256679]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:46:54 compute-0 lvm[256679]: VG ceph_vg2 finished
Dec 04 10:46:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.914 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:46:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:46:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:46:54 compute-0 stoic_shockley[256598]: {}
Dec 04 10:46:55 compute-0 systemd[1]: libpod-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Deactivated successfully.
Dec 04 10:46:55 compute-0 systemd[1]: libpod-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Consumed 1.528s CPU time.
Dec 04 10:46:55 compute-0 podman[256581]: 2025-12-04 10:46:55.018891344 +0000 UTC m=+1.112087237 container died 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 04 10:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646-merged.mount: Deactivated successfully.
Dec 04 10:46:55 compute-0 podman[256581]: 2025-12-04 10:46:55.077267265 +0000 UTC m=+1.170463138 container remove 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:46:55 compute-0 systemd[1]: libpod-conmon-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Deactivated successfully.
Dec 04 10:46:55 compute-0 sudo[256504]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:46:55 compute-0 ceph-mon[75358]: pgmap v1116: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 121 KiB/s wr, 11 op/s
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf' of type subvolume
Dec 04 10:46:55 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:55.163+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf' of type subvolume
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf'' moved to trashcan
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec 04 10:46:55 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:55 compute-0 sudo[256694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:46:55 compute-0 sudo[256694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:46:55 compute-0 sudo[256694]: pam_unix(sudo:session): session closed for user root
Dec 04 10:46:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 6 op/s
Dec 04 10:46:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec 04 10:46:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:56 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:56 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:46:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:56 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:46:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:46:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:46:57 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:46:57 compute-0 ceph-mon[75358]: pgmap v1117: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 6 op/s
Dec 04 10:46:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:46:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:46:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 124 KiB/s wr, 86 op/s
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:58 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "force": true, "format": "json"}]: dispatch
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043'' moved to trashcan
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:46:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:46:59 compute-0 ceph-mon[75358]: pgmap v1118: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 124 KiB/s wr, 86 op/s
Dec 04 10:46:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:46:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 83 op/s
Dec 04 10:47:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec 04 10:47:00 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:47:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:47:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:47:00 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:47:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:47:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:00 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:00 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:01 compute-0 ceph-mon[75358]: pgmap v1119: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 83 op/s
Dec 04 10:47:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:47:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:01 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 84 op/s
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec 04 10:47:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:47:03 compute-0 ceph-mon[75358]: pgmap v1120: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 84 op/s
Dec 04 10:47:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 132 KiB/s wr, 88 op/s
Dec 04 10:47:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:47:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 04 10:47:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:47:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 04 10:47:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:47:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:47:04 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:47:04 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 04 10:47:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 04 10:47:04 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 04 10:47:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:05.286+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134bada8-f9d1-4734-8cb9-4d8f094ffc02' of type subvolume
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134bada8-f9d1-4734-8cb9-4d8f094ffc02' of type subvolume
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02'' moved to trashcan
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec 04 10:47:05 compute-0 ceph-mon[75358]: pgmap v1121: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 132 KiB/s wr, 88 op/s
Dec 04 10:47:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:47:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 04 10:47:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 104 KiB/s wr, 84 op/s
Dec 04 10:47:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec 04 10:47:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec 04 10:47:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec 04 10:47:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec 04 10:47:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:47:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:07 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:47:07 compute-0 ceph-mon[75358]: pgmap v1122: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 104 KiB/s wr, 84 op/s
Dec 04 10:47:07 compute-0 ceph-mon[75358]: osdmap e153: 3 total, 3 up, 3 in
Dec 04 10:47:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:47:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:07 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:08 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:09 compute-0 ceph-mon[75358]: pgmap v1124: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:09 compute-0 podman[256722]: 2025-12-04 10:47:09.955469525 +0000 UTC m=+0.061162332 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 04 10:47:09 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:09.957 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:47:09 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:09.959 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:47:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:47:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:47:11 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:11 compute-0 ceph-mon[75358]: pgmap v1125: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:47:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:47:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:47:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:47:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:12 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:47:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:47:13 compute-0 ceph-mon[75358]: pgmap v1126: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec 04 10:47:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 100 KiB/s wr, 9 op/s
Dec 04 10:47:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 04 10:47:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec 04 10:47:14 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec 04 10:47:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:47:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:47:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:14 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:47:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:47:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:14 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:15 compute-0 ceph-mon[75358]: pgmap v1127: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 100 KiB/s wr, 9 op/s
Dec 04 10:47:15 compute-0 ceph-mon[75358]: osdmap e154: 3 total, 3 up, 3 in
Dec 04 10:47:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:15 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 447 B/s rd, 109 KiB/s wr, 9 op/s
Dec 04 10:47:16 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec 04 10:47:17 compute-0 ceph-mon[75358]: pgmap v1129: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 447 B/s rd, 109 KiB/s wr, 9 op/s
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/13d4aaa8-c75f-4995-b55e-e3eaac7e47b3'.
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec 04 10:47:17 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:47:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 04 10:47:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 04 10:47:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:47:18 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:47:18 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 04 10:47:18 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 04 10:47:18 compute-0 podman[256746]: 2025-12-04 10:47:18.947589721 +0000 UTC m=+0.046213655 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 04 10:47:18 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:18.961 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:47:18 compute-0 podman[256745]: 2025-12-04 10:47:18.987242913 +0000 UTC m=+0.087918618 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 04 10:47:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:19 compute-0 ceph-mon[75358]: pgmap v1130: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec 04 10:47:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:19 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 04 10:47:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "format": "json"}]: dispatch
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:21 compute-0 ceph-mon[75358]: pgmap v1131: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 04 10:47:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:21 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID bob with tenant 7df6681d57a74b90abc5310588588b91
Dec 04 10:47:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec 04 10:47:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:21 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:21 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:22 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "format": "json"}]: dispatch
Dec 04 10:47:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec 04 10:47:22 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec 04 10:47:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 91 KiB/s wr, 7 op/s
Dec 04 10:47:23 compute-0 ceph-mon[75358]: pgmap v1132: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec 04 10:47:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:24 compute-0 ceph-mon[75358]: pgmap v1133: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 91 KiB/s wr, 7 op/s
Dec 04 10:47:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec 04 10:47:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec 04 10:47:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 92 B/s rd, 82 KiB/s wr, 7 op/s
Dec 04 10:47:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77'.
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta.tmp'
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta.tmp' to config b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta'
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "format": "json"}]: dispatch
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:47:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:26 compute-0 sshd-session[256789]: Invalid user azureuser from 101.47.163.20 port 34372
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:47:26
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'vms', 'backups', '.rgw.root']
Dec 04 10:47:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:47:26 compute-0 ceph-mon[75358]: pgmap v1134: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 92 B/s rd, 82 KiB/s wr, 7 op/s
Dec 04 10:47:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:26 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "format": "json"}]: dispatch
Dec 04 10:47:26 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:27 compute-0 sshd-session[256789]: Received disconnect from 101.47.163.20 port 34372:11: Bye Bye [preauth]
Dec 04 10:47:27 compute-0 sshd-session[256789]: Disconnected from invalid user azureuser 101.47.163.20 port 34372 [preauth]
Dec 04 10:47:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 98 KiB/s wr, 9 op/s
Dec 04 10:47:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad2040>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413aa5580>)]
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:997ad407-3986-4029-acca-2f53511b4ff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:997ad407-3986-4029-acca-2f53511b4ff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ad407-3986-4029-acca-2f53511b4ff3' of type subvolume
Dec 04 10:47:28 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:28.443+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ad407-3986-4029-acca-2f53511b4ff3' of type subvolume
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3'' moved to trashcan
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417af2580>)]
Dec 04 10:47:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:47:28 compute-0 ceph-mon[75358]: pgmap v1135: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 98 KiB/s wr, 9 op/s
Dec 04 10:47:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec 04 10:47:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 75 KiB/s wr, 7 op/s
Dec 04 10:47:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 04 10:47:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} v 0)
Dec 04 10:47:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} : dispatch
Dec 04 10:47:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]}]': finished
Dec 04 10:47:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 04 10:47:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec 04 10:47:29 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.iwufnj(active, since 33m)
Dec 04 10:47:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} : dispatch
Dec 04 10:47:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]}]': finished
Dec 04 10:47:29 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec 04 10:47:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec 04 10:47:30 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec 04 10:47:30 compute-0 ceph-mon[75358]: pgmap v1136: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 75 KiB/s wr, 7 op/s
Dec 04 10:47:30 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec 04 10:47:30 compute-0 ceph-mon[75358]: mgrmap e15: compute-0.iwufnj(active, since 33m)
Dec 04 10:47:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 90 KiB/s wr, 8 op/s
Dec 04 10:47:31 compute-0 ceph-mon[75358]: osdmap e155: 3 total, 3 up, 3 in
Dec 04 10:47:32 compute-0 sshd-session[256791]: Connection reset by 198.235.24.112 port 63290 [preauth]
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/d0dd193d-277f-49c2-89da-22c500b1172f'.
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:47:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:32 compute-0 ceph-mon[75358]: pgmap v1138: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 90 KiB/s wr, 8 op/s
Dec 04 10:47:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec 04 10:47:32 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 04 10:47:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} v 0)
Dec 04 10:47:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} : dispatch
Dec 04 10:47:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]}]': finished
Dec 04 10:47:33 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77
Dec 04 10:47:33 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77],prefix=session evict} (starting...)
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec 04 10:47:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 8 op/s
Dec 04 10:47:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} : dispatch
Dec 04 10:47:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]}]': finished
Dec 04 10:47:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec 04 10:47:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec 04 10:47:34 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec 04 10:47:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:47:35 compute-0 ceph-mon[75358]: pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 8 op/s
Dec 04 10:47:35 compute-0 ceph-mon[75358]: osdmap e156: 3 total, 3 up, 3 in
Dec 04 10:47:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "format": "json"}]: dispatch
Dec 04 10:47:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 04 10:47:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Dec 04 10:47:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 04 10:47:36 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec 04 10:47:36 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 04 10:47:36 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:37 compute-0 ceph-mon[75358]: pgmap v1141: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 6 op/s
Dec 04 10:47:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "format": "json"}]: dispatch
Dec 04 10:47:37 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 04 10:47:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 04 10:47:37 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666797954154184 of space, bias 1.0, pg target 0.20000393862462554 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000475949403355812 of space, bias 4.0, pg target 0.5711392840269744 quantized to 16 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:47:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 148 KiB/s wr, 13 op/s
Dec 04 10:47:38 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec 04 10:47:39 compute-0 ceph-mon[75358]: pgmap v1142: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 148 KiB/s wr, 13 op/s
Dec 04 10:47:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 137 KiB/s wr, 12 op/s
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1de31656-5fa1-4344-818a-900ef388b939, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 podman[256796]: 2025-12-04 10:47:40.960269311 +0000 UTC m=+0.061623142 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1de31656-5fa1-4344-818a-900ef388b939, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1de31656-5fa1-4344-818a-900ef388b939' of type subvolume
Dec 04 10:47:40 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:40.961+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1de31656-5fa1-4344-818a-900ef388b939' of type subvolume
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939'' moved to trashcan
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:47:40 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec 04 10:47:41 compute-0 ceph-mon[75358]: pgmap v1143: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 137 KiB/s wr, 12 op/s
Dec 04 10:47:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:41 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.072192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261072314, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 253, "total_data_size": 2210906, "memory_usage": 2244944, "flush_reason": "Manual Compaction"}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261090750, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2173089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24536, "largest_seqno": 26254, "table_properties": {"data_size": 2165466, "index_size": 4245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18866, "raw_average_key_size": 21, "raw_value_size": 2148958, "raw_average_value_size": 2393, "num_data_blocks": 189, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845154, "oldest_key_time": 1764845154, "file_creation_time": 1764845261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18612 microseconds, and 10921 cpu microseconds.
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.090816) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2173089 bytes OK
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.090853) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092882) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092903) EVENT_LOG_v1 {"time_micros": 1764845261092895, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2202916, prev total WAL file size 2202916, number of live WAL files 2.
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.094116) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2122KB)], [56(9281KB)]
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261094147, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11677006, "oldest_snapshot_seqno": -1}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5638 keys, 9974100 bytes, temperature: kUnknown
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261157589, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 9974100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9933099, "index_size": 25787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 140410, "raw_average_key_size": 24, "raw_value_size": 9828761, "raw_average_value_size": 1743, "num_data_blocks": 1071, "num_entries": 5638, "num_filter_entries": 5638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.157910) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9974100 bytes
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.159303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.8 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 6166, records dropped: 528 output_compression: NoCompression
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.159324) EVENT_LOG_v1 {"time_micros": 1764845261159313, "job": 30, "event": "compaction_finished", "compaction_time_micros": 63536, "compaction_time_cpu_micros": 21544, "output_level": 6, "num_output_files": 1, "total_output_size": 9974100, "num_input_records": 6166, "num_output_records": 5638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261159888, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261162073, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.094007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:47:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 118 KiB/s wr, 10 op/s
Dec 04 10:47:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec 04 10:47:42 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:43 compute-0 ceph-mon[75358]: pgmap v1144: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 118 KiB/s wr, 10 op/s
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s wr, 7 op/s
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:47:43 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:43.824+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '002e05aa-0dc4-4f1b-ba53-39cac0015b96' of type subvolume
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '002e05aa-0dc4-4f1b-ba53-39cac0015b96' of type subvolume
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96'' moved to trashcan
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:47:43 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec 04 10:47:44 compute-0 nova_compute[244644]: 2025-12-04 10:47:44.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:45 compute-0 ceph-mon[75358]: pgmap v1145: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s wr, 7 op/s
Dec 04 10:47:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec 04 10:47:45 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "force": true, "format": "json"}]: dispatch
Dec 04 10:47:45 compute-0 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:45 compute-0 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:47:45 compute-0 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:47:45 compute-0 nova_compute[244644]: 2025-12-04 10:47:45.376 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:47:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 6 op/s
Dec 04 10:47:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Dec 04 10:47:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Dec 04 10:47:46 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Dec 04 10:47:47 compute-0 ceph-mon[75358]: pgmap v1146: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 6 op/s
Dec 04 10:47:47 compute-0 ceph-mon[75358]: osdmap e157: 3 total, 3 up, 3 in
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.394 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.394 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:47:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:47 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:47:47 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449490454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:47:47 compute-0 nova_compute[244644]: 2025-12-04 10:47:47.946 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.127 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.129 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5038MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.129 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.130 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:47:48 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/449490454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.199 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.199 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.234 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:47:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:47:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820531978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.797 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.803 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.817 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.819 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:47:48 compute-0 nova_compute[244644]: 2025-12-04 10:47:48.819 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:47:49 compute-0 ceph-mon[75358]: pgmap v1148: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2820531978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:47:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:49 compute-0 nova_compute[244644]: 2025-12-04 10:47:49.814 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:49 compute-0 nova_compute[244644]: 2025-12-04 10:47:49.883 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:49 compute-0 podman[256862]: 2025-12-04 10:47:49.944975989 +0000 UTC m=+0.049574808 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:47:50 compute-0 podman[256861]: 2025-12-04 10:47:50.005028443 +0000 UTC m=+0.112533483 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 04 10:47:51 compute-0 ceph-mon[75358]: pgmap v1149: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:51 compute-0 nova_compute[244644]: 2025-12-04 10:47:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:51 compute-0 nova_compute[244644]: 2025-12-04 10:47:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:51 compute-0 nova_compute[244644]: 2025-12-04 10:47:51.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:47:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:52 compute-0 nova_compute[244644]: 2025-12-04 10:47:52.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:53 compute-0 ceph-mon[75358]: pgmap v1150: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec 04 10:47:53 compute-0 nova_compute[244644]: 2025-12-04 10:47:53.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/8a0ffa48-f0a7-4f73-a336-ef0dc6937c97'.
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:47:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Dec 04 10:47:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Dec 04 10:47:54 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Dec 04 10:47:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:47:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:47:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:47:55 compute-0 sudo[256905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:47:55 compute-0 sudo[256905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:55 compute-0 sudo[256905]: pam_unix(sudo:session): session closed for user root
Dec 04 10:47:55 compute-0 sudo[256930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 04 10:47:55 compute-0 sudo[256930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:55 compute-0 ceph-mon[75358]: pgmap v1151: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Dec 04 10:47:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec 04 10:47:55 compute-0 ceph-mon[75358]: osdmap e158: 3 total, 3 up, 3 in
Dec 04 10:47:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 42 KiB/s wr, 4 op/s
Dec 04 10:47:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:55 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:55 compute-0 sudo[256930]: pam_unix(sudo:session): session closed for user root
Dec 04 10:47:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:47:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:47:56 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/1fd4480b-ac42-4524-a420-91fd304b251c'.
Dec 04 10:47:56 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:56 compute-0 sudo[256975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:47:56 compute-0 sudo[256975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:56 compute-0 sudo[256975]: pam_unix(sudo:session): session closed for user root
Dec 04 10:47:56 compute-0 sudo[257000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:47:56 compute-0 sudo[257000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:47:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:57 compute-0 sudo[257000]: pam_unix(sudo:session): session closed for user root
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "format": "json"}]: dispatch
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:47:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 04 10:47:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:47:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:47:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:47:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:47:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:47:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:47:57 compute-0 ceph-mon[75358]: pgmap v1153: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 42 KiB/s wr, 4 op/s
Dec 04 10:47:57 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:47:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:47:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:47:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:47:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:47:58 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:47:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:47:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:47:58 compute-0 sudo[257056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:47:58 compute-0 sudo[257056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:58 compute-0 sudo[257056]: pam_unix(sudo:session): session closed for user root
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:47:58 compute-0 sudo[257081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:47:58 compute-0 sudo[257081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:47:58 compute-0 podman[257118]: 2025-12-04 10:47:58.896518431 +0000 UTC m=+0.038576318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "format": "json"}]: dispatch
Dec 04 10:47:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.388526971 +0000 UTC m=+0.530584888 container create 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "format": "json"}]: dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: pgmap v1154: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:47:59 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:47:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:47:59 compute-0 systemd[1]: Started libpod-conmon-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope.
Dec 04 10:47:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.536122113 +0000 UTC m=+0.678180020 container init 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.544166921 +0000 UTC m=+0.686224808 container start 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 10:47:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:47:59 compute-0 affectionate_shirley[257134]: 167 167
Dec 04 10:47:59 compute-0 systemd[1]: libpod-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope: Deactivated successfully.
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.550454955 +0000 UTC m=+0.692512862 container attach 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.551548732 +0000 UTC m=+0.693606619 container died 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea16818539dbbce0cd15192b9ab2fbda13a4bd23525e5480aa2e9bde5175053-merged.mount: Deactivated successfully.
Dec 04 10:47:59 compute-0 podman[257118]: 2025-12-04 10:47:59.602776598 +0000 UTC m=+0.744834485 container remove 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:47:59 compute-0 systemd[1]: libpod-conmon-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope: Deactivated successfully.
Dec 04 10:47:59 compute-0 podman[257157]: 2025-12-04 10:47:59.788120076 +0000 UTC m=+0.048207734 container create f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:47:59 compute-0 systemd[1]: Started libpod-conmon-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope.
Dec 04 10:47:59 compute-0 podman[257157]: 2025-12-04 10:47:59.769032847 +0000 UTC m=+0.029120525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:47:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:47:59 compute-0 podman[257157]: 2025-12-04 10:47:59.886471369 +0000 UTC m=+0.146559027 container init f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:47:59 compute-0 podman[257157]: 2025-12-04 10:47:59.895912901 +0000 UTC m=+0.156000559 container start f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:47:59 compute-0 podman[257157]: 2025-12-04 10:47:59.901451086 +0000 UTC m=+0.161538754 container attach f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:48:00 compute-0 hungry_poincare[257173]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:48:00 compute-0 hungry_poincare[257173]: --> All data devices are unavailable
Dec 04 10:48:00 compute-0 systemd[1]: libpod-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope: Deactivated successfully.
Dec 04 10:48:00 compute-0 podman[257157]: 2025-12-04 10:48:00.408935567 +0000 UTC m=+0.669023215 container died f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 04 10:48:01 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "format": "json"}]: dispatch
Dec 04 10:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e-merged.mount: Deactivated successfully.
Dec 04 10:48:01 compute-0 podman[257157]: 2025-12-04 10:48:01.349584456 +0000 UTC m=+1.609672114 container remove f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:48:01 compute-0 systemd[1]: libpod-conmon-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope: Deactivated successfully.
Dec 04 10:48:01 compute-0 sudo[257081]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:01 compute-0 sudo[257206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:48:01 compute-0 sudo[257206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:48:01 compute-0 sudo[257206]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:48:01 compute-0 sudo[257231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:48:01 compute-0 sudo[257231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:48:01 compute-0 podman[257268]: 2025-12-04 10:48:01.891405059 +0000 UTC m=+0.049928096 container create cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 04 10:48:01 compute-0 systemd[1]: Started libpod-conmon-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope.
Dec 04 10:48:01 compute-0 podman[257268]: 2025-12-04 10:48:01.869912932 +0000 UTC m=+0.028435989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:48:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:48:01 compute-0 podman[257268]: 2025-12-04 10:48:01.985091748 +0000 UTC m=+0.143614805 container init cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:48:01 compute-0 podman[257268]: 2025-12-04 10:48:01.993363351 +0000 UTC m=+0.151886388 container start cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:48:01 compute-0 podman[257268]: 2025-12-04 10:48:01.998502886 +0000 UTC m=+0.157026093 container attach cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:48:02 compute-0 trusting_northcutt[257284]: 167 167
Dec 04 10:48:02 compute-0 systemd[1]: libpod-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope: Deactivated successfully.
Dec 04 10:48:02 compute-0 conmon[257284]: conmon cb97052c31d0a35b0983 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope/container/memory.events
Dec 04 10:48:02 compute-0 podman[257268]: 2025-12-04 10:48:02.003619652 +0000 UTC m=+0.162142689 container died cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ae1e244d3218490f8b66c33ae669b365ec570ce38f5b238859dd70f14ce93b3-merged.mount: Deactivated successfully.
Dec 04 10:48:02 compute-0 podman[257268]: 2025-12-04 10:48:02.043488111 +0000 UTC m=+0.202011148 container remove cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 04 10:48:02 compute-0 systemd[1]: libpod-conmon-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope: Deactivated successfully.
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec 04 10:48:02 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:02 compute-0 ceph-mon[75358]: pgmap v1155: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.222115043 +0000 UTC m=+0.047281041 container create 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:48:02 compute-0 systemd[1]: Started libpod-conmon-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope.
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.201447426 +0000 UTC m=+0.026613444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:48:02 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.325465589 +0000 UTC m=+0.150631607 container init 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.335243459 +0000 UTC m=+0.160409457 container start 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.338389516 +0000 UTC m=+0.163555534 container attach 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]: {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     "0": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "devices": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "/dev/loop3"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             ],
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_name": "ceph_lv0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_size": "21470642176",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "name": "ceph_lv0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "tags": {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_name": "ceph",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.crush_device_class": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.encrypted": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.objectstore": "bluestore",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_id": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.vdo": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.with_tpm": "0"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             },
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "vg_name": "ceph_vg0"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         }
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     ],
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     "1": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "devices": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "/dev/loop4"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             ],
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_name": "ceph_lv1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_size": "21470642176",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "name": "ceph_lv1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "tags": {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_name": "ceph",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.crush_device_class": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.encrypted": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.objectstore": "bluestore",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_id": "1",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.vdo": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.with_tpm": "0"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             },
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "vg_name": "ceph_vg1"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         }
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     ],
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     "2": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "devices": [
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "/dev/loop5"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             ],
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_name": "ceph_lv2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_size": "21470642176",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "name": "ceph_lv2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "tags": {
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.cluster_name": "ceph",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.crush_device_class": "",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.encrypted": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.objectstore": "bluestore",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osd_id": "2",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.vdo": "0",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:                 "ceph.with_tpm": "0"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             },
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "type": "block",
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:             "vg_name": "ceph_vg2"
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:         }
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]:     ]
Dec 04 10:48:02 compute-0 upbeat_hellman[257325]: }
Dec 04 10:48:02 compute-0 systemd[1]: libpod-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope: Deactivated successfully.
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.664553328 +0000 UTC m=+0.489719326 container died 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491-merged.mount: Deactivated successfully.
Dec 04 10:48:02 compute-0 podman[257308]: 2025-12-04 10:48:02.70862975 +0000 UTC m=+0.533795748 container remove 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:48:02 compute-0 systemd[1]: libpod-conmon-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope: Deactivated successfully.
Dec 04 10:48:02 compute-0 sudo[257231]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:02 compute-0 sudo[257347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:48:02 compute-0 sudo[257347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:48:02 compute-0 sudo[257347]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:02 compute-0 sudo[257372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:48:02 compute-0 sudo[257372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.262545719 +0000 UTC m=+0.104018532 container create c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.18552673 +0000 UTC m=+0.026999573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:48:03 compute-0 systemd[1]: Started libpod-conmon-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope.
Dec 04 10:48:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:48:03 compute-0 ceph-mon[75358]: pgmap v1156: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec 04 10:48:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.34406481 +0000 UTC m=+0.185537633 container init c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.35301783 +0000 UTC m=+0.194490633 container start c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:48:03 compute-0 vigilant_khayyam[257424]: 167 167
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.358498604 +0000 UTC m=+0.199971437 container attach c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:48:03 compute-0 systemd[1]: libpod-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope: Deactivated successfully.
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.360979545 +0000 UTC m=+0.202452358 container died c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 04 10:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb53b636944ff3c3a8f4f1e610e5f87327bcb8752f75890f1b09effec486c7fc-merged.mount: Deactivated successfully.
Dec 04 10:48:03 compute-0 podman[257408]: 2025-12-04 10:48:03.406328627 +0000 UTC m=+0.247801440 container remove c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 04 10:48:03 compute-0 systemd[1]: libpod-conmon-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope: Deactivated successfully.
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s wr, 3 op/s
Dec 04 10:48:03 compute-0 podman[257449]: 2025-12-04 10:48:03.575048746 +0000 UTC m=+0.045080316 container create e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 04 10:48:03 compute-0 systemd[1]: Started libpod-conmon-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope.
Dec 04 10:48:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:48:03 compute-0 podman[257449]: 2025-12-04 10:48:03.553266283 +0000 UTC m=+0.023297873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:48:03 compute-0 podman[257449]: 2025-12-04 10:48:03.650770765 +0000 UTC m=+0.120802365 container init e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:48:03 compute-0 podman[257449]: 2025-12-04 10:48:03.66480939 +0000 UTC m=+0.134840960 container start e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:48:03 compute-0 podman[257449]: 2025-12-04 10:48:03.668214053 +0000 UTC m=+0.138245653 container attach e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec 04 10:48:03 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:03 compute-0 sshd-session[257328]: Received disconnect from 103.179.218.243 port 43400:11: Bye Bye [preauth]
Dec 04 10:48:03 compute-0 sshd-session[257328]: Disconnected from authenticating user root 103.179.218.243 port 43400 [preauth]
Dec 04 10:48:04 compute-0 lvm[257543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:48:04 compute-0 lvm[257543]: VG ceph_vg0 finished
Dec 04 10:48:04 compute-0 lvm[257544]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:48:04 compute-0 lvm[257544]: VG ceph_vg1 finished
Dec 04 10:48:04 compute-0 lvm[257546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:48:04 compute-0 lvm[257546]: VG ceph_vg2 finished
Dec 04 10:48:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:04 compute-0 inspiring_fermat[257465]: {}
Dec 04 10:48:04 compute-0 systemd[1]: libpod-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Deactivated successfully.
Dec 04 10:48:04 compute-0 systemd[1]: libpod-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Consumed 1.365s CPU time.
Dec 04 10:48:04 compute-0 podman[257449]: 2025-12-04 10:48:04.513887331 +0000 UTC m=+0.983918901 container died e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb-merged.mount: Deactivated successfully.
Dec 04 10:48:04 compute-0 podman[257449]: 2025-12-04 10:48:04.654531021 +0000 UTC m=+1.124562611 container remove e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:48:04 compute-0 systemd[1]: libpod-conmon-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Deactivated successfully.
Dec 04 10:48:04 compute-0 sudo[257372]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:48:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:48:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:48:04 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:48:04 compute-0 sudo[257563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:48:04 compute-0 sudo[257563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:48:04 compute-0 sudo[257563]: pam_unix(sudo:session): session closed for user root
Dec 04 10:48:05 compute-0 ceph-mon[75358]: pgmap v1157: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s wr, 3 op/s
Dec 04 10:48:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:05 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:48:05 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:48:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:48:05.502+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de2ac86-d29c-49e9-b8b1-f1b9a7934340' of type subvolume
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de2ac86-d29c-49e9-b8b1-f1b9a7934340' of type subvolume
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340'' moved to trashcan
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec 04 10:48:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Dec 04 10:48:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Dec 04 10:48:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Dec 04 10:48:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:59666e24-d766-4aa9-9e78-1be546c42532, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:59666e24-d766-4aa9-9e78-1be546c42532, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:48:07 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:48:07.136+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '59666e24-d766-4aa9-9e78-1be546c42532' of type subvolume
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '59666e24-d766-4aa9-9e78-1be546c42532' of type subvolume
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532'' moved to trashcan
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec 04 10:48:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec 04 10:48:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:07 compute-0 ceph-mon[75358]: pgmap v1158: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Dec 04 10:48:07 compute-0 ceph-mon[75358]: osdmap e159: 3 total, 3 up, 3 in
Dec 04 10:48:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec 04 10:48:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec 04 10:48:08 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "force": true, "format": "json"}]: dispatch
Dec 04 10:48:09 compute-0 ceph-mon[75358]: pgmap v1160: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec 04 10:48:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec 04 10:48:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:10.023 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:48:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:10.024 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:48:11 compute-0 ceph-mon[75358]: pgmap v1161: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec 04 10:48:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:48:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:48:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:48:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:48:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 6 op/s
Dec 04 10:48:11 compute-0 podman[257588]: 2025-12-04 10:48:11.981376851 +0000 UTC m=+0.072185282 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:48:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:48:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:48:13 compute-0 ceph-mon[75358]: pgmap v1162: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 6 op/s
Dec 04 10:48:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 71 KiB/s wr, 6 op/s
Dec 04 10:48:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Dec 04 10:48:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Dec 04 10:48:15 compute-0 ceph-mon[75358]: pgmap v1163: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 71 KiB/s wr, 6 op/s
Dec 04 10:48:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 77 KiB/s wr, 7 op/s
Dec 04 10:48:15 compute-0 sshd-session[257610]: Invalid user cgpexpert from 107.175.213.239 port 35612
Dec 04 10:48:15 compute-0 sshd-session[257610]: Received disconnect from 107.175.213.239 port 35612:11: Bye Bye [preauth]
Dec 04 10:48:15 compute-0 sshd-session[257610]: Disconnected from invalid user cgpexpert 107.175.213.239 port 35612 [preauth]
Dec 04 10:48:16 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Dec 04 10:48:16 compute-0 ceph-mon[75358]: osdmap e160: 3 total, 3 up, 3 in
Dec 04 10:48:17 compute-0 ceph-mon[75358]: pgmap v1165: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 77 KiB/s wr, 7 op/s
Dec 04 10:48:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec 04 10:48:19 compute-0 ceph-mon[75358]: pgmap v1166: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec 04 10:48:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec 04 10:48:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:20 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:20.026 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:48:20 compute-0 podman[257613]: 2025-12-04 10:48:20.949158942 +0000 UTC m=+0.057315237 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 04 10:48:20 compute-0 podman[257612]: 2025-12-04 10:48:20.986090489 +0000 UTC m=+0.094370327 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 04 10:48:21 compute-0 ceph-mon[75358]: pgmap v1167: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec 04 10:48:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 2 op/s
Dec 04 10:48:23 compute-0 ceph-mon[75358]: pgmap v1168: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 2 op/s
Dec 04 10:48:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec 04 10:48:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:25 compute-0 ceph-mon[75358]: pgmap v1169: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec 04 10:48:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec 04 10:48:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:48:26
Dec 04 10:48:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:48:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:48:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec 04 10:48:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:48:27 compute-0 ceph-mon[75358]: pgmap v1170: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec 04 10:48:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:28 compute-0 ceph-mon[75358]: pgmap v1171: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:30 compute-0 ceph-mon[75358]: pgmap v1172: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:32 compute-0 ceph-mon[75358]: pgmap v1173: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:34 compute-0 ceph-mon[75358]: pgmap v1174: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec 04 10:48:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:36 compute-0 ceph-mon[75358]: pgmap v1175: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660922644713851 of space, bias 1.0, pg target 0.19982767934141552 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005189120990733053 of space, bias 4.0, pg target 0.6226945188879663 quantized to 16 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:48:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:39 compute-0 ceph-mon[75358]: pgmap v1176: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:42 compute-0 ceph-mon[75358]: pgmap v1177: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:42 compute-0 podman[257657]: 2025-12-04 10:48:42.94335199 +0000 UTC m=+0.055064781 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 04 10:48:43 compute-0 ceph-mon[75358]: pgmap v1178: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:44 compute-0 nova_compute[244644]: 2025-12-04 10:48:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:45 compute-0 ceph-mon[75358]: pgmap v1179: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.699 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.700 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.700 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 04 10:48:46 compute-0 nova_compute[244644]: 2025-12-04 10:48:46.723 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 04 10:48:47 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:47.186 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:48:47 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:47.187 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:48:47 compute-0 ceph-mon[75358]: pgmap v1180: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:47 compute-0 nova_compute[244644]: 2025-12-04 10:48:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:48 compute-0 nova_compute[244644]: 2025-12-04 10:48:48.354 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:49 compute-0 ceph-mon[75358]: pgmap v1181: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.387 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:48:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:48:49 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328841810' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:48:49 compute-0 nova_compute[244644]: 2025-12-04 10:48:49.940 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.099 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.100 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5026MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.100 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.101 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:48:50 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1328841810' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.483 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.483 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.653 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.749 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.749 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.785 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.810 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 04 10:48:50 compute-0 nova_compute[244644]: 2025-12-04 10:48:50.839 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:48:51 compute-0 ceph-mon[75358]: pgmap v1182: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:48:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154574257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:48:51 compute-0 nova_compute[244644]: 2025-12-04 10:48:51.380 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:48:51 compute-0 nova_compute[244644]: 2025-12-04 10:48:51.387 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:48:51 compute-0 nova_compute[244644]: 2025-12-04 10:48:51.407 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:48:51 compute-0 nova_compute[244644]: 2025-12-04 10:48:51.408 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:48:51 compute-0 nova_compute[244644]: 2025-12-04 10:48:51.409 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:48:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:51 compute-0 podman[257722]: 2025-12-04 10:48:51.962873399 +0000 UTC m=+0.063555111 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:48:52 compute-0 podman[257721]: 2025-12-04 10:48:52.001136207 +0000 UTC m=+0.104706670 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 04 10:48:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3154574257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:48:53 compute-0 ceph-mon[75358]: pgmap v1183: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:53 compute-0 nova_compute[244644]: 2025-12-04 10:48:53.409 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:53 compute-0 nova_compute[244644]: 2025-12-04 10:48:53.410 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:53 compute-0 nova_compute[244644]: 2025-12-04 10:48:53.410 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:53 compute-0 nova_compute[244644]: 2025-12-04 10:48:53.411 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:53 compute-0 nova_compute[244644]: 2025-12-04 10:48:53.411 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:48:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:54 compute-0 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:54 compute-0 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:48:54 compute-0 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/3c9b0285-3124-4ba7-b951-215aec98e0e4'.
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:54 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:48:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:48:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:48:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.917 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:48:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.918 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:48:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:48:55 compute-0 ceph-mon[75358]: pgmap v1184: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:48:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:56 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:48:56.189 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:48:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:48:56 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec 04 10:48:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:48:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "format": "json"}]: dispatch
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:58 compute-0 ceph-mon[75358]: pgmap v1185: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:48:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:48:59 compute-0 ceph-mon[75358]: pgmap v1186: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:48:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "format": "json"}]: dispatch
Dec 04 10:48:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:48:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:00 compute-0 ceph-mon[75358]: pgmap v1187: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec 04 10:49:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:02 compute-0 ceph-mon[75358]: pgmap v1188: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 04 10:49:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:02 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec 04 10:49:04 compute-0 ceph-mon[75358]: pgmap v1189: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec 04 10:49:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:04 compute-0 sudo[257767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:49:04 compute-0 sudo[257767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:04 compute-0 sudo[257767]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:04 compute-0 sudo[257792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:49:04 compute-0 sudo[257792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:05.371+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1db2d22c-803f-4ebe-b241-8ba03a81e7dc' of type subvolume
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1db2d22c-803f-4ebe-b241-8ba03a81e7dc' of type subvolume
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc'' moved to trashcan
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec 04 10:49:05 compute-0 sudo[257792]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:49:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:49:05 compute-0 sudo[257849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:49:05 compute-0 sudo[257849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:05 compute-0 sudo[257849]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Dec 04 10:49:05 compute-0 sudo[257874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:49:05 compute-0 sudo[257874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.022935587 +0000 UTC m=+0.023895088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:06 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.518873725 +0000 UTC m=+0.519833206 container create 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:49:06 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:49:06 compute-0 systemd[1]: Started libpod-conmon-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope.
Dec 04 10:49:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.713452709 +0000 UTC m=+0.714412210 container init 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.722984453 +0000 UTC m=+0.723943934 container start 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.726943789 +0000 UTC m=+0.727903300 container attach 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:49:06 compute-0 adoring_booth[257927]: 167 167
Dec 04 10:49:06 compute-0 systemd[1]: libpod-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope: Deactivated successfully.
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.732493625 +0000 UTC m=+0.733453116 container died 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-085a80b3b9e43448c1521c88b2d0675c8fd7670f89fd20db187915f7a7ef0dab-merged.mount: Deactivated successfully.
Dec 04 10:49:06 compute-0 podman[257911]: 2025-12-04 10:49:06.783082147 +0000 UTC m=+0.784041628 container remove 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:49:06 compute-0 systemd[1]: libpod-conmon-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope: Deactivated successfully.
Dec 04 10:49:06 compute-0 podman[257952]: 2025-12-04 10:49:06.963573835 +0000 UTC m=+0.050425949 container create 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:49:07 compute-0 systemd[1]: Started libpod-conmon-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope.
Dec 04 10:49:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:06.945184584 +0000 UTC m=+0.032036718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:07.178078218 +0000 UTC m=+0.264930332 container init 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:07.184255769 +0000 UTC m=+0.271107883 container start 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:07.412838947 +0000 UTC m=+0.499691061 container attach 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec 04 10:49:07 compute-0 bold_mestorf[257969]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:49:07 compute-0 bold_mestorf[257969]: --> All data devices are unavailable
Dec 04 10:49:07 compute-0 systemd[1]: libpod-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope: Deactivated successfully.
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:07.664731197 +0000 UTC m=+0.751583311 container died 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:49:07 compute-0 ceph-mon[75358]: pgmap v1190: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec 04 10:49:07 compute-0 ceph-mon[75358]: osdmap e161: 3 total, 3 up, 3 in
Dec 04 10:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff-merged.mount: Deactivated successfully.
Dec 04 10:49:07 compute-0 podman[257952]: 2025-12-04 10:49:07.957974312 +0000 UTC m=+1.044826416 container remove 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:49:07 compute-0 systemd[1]: libpod-conmon-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope: Deactivated successfully.
Dec 04 10:49:08 compute-0 sudo[257874]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:08 compute-0 sudo[258001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:49:08 compute-0 sudo[258001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:08 compute-0 sudo[258001]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:08 compute-0 sudo[258026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:49:08 compute-0 sudo[258026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.402823106 +0000 UTC m=+0.041895679 container create 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:49:08 compute-0 systemd[1]: Started libpod-conmon-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope.
Dec 04 10:49:08 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.383915593 +0000 UTC m=+0.022988196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.485135196 +0000 UTC m=+0.124207769 container init 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.491758488 +0000 UTC m=+0.130831061 container start 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.495379067 +0000 UTC m=+0.134451640 container attach 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:08 compute-0 loving_cerf[258080]: 167 167
Dec 04 10:49:08 compute-0 systemd[1]: libpod-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope: Deactivated successfully.
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.497599702 +0000 UTC m=+0.136672275 container died 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e894b3b78cfd64c477fe0049adf76dceda3722fceb97511d8ee60944f6dbb8f-merged.mount: Deactivated successfully.
Dec 04 10:49:08 compute-0 podman[258062]: 2025-12-04 10:49:08.534077756 +0000 UTC m=+0.173150359 container remove 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:49:08 compute-0 systemd[1]: libpod-conmon-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope: Deactivated successfully.
Dec 04 10:49:08 compute-0 podman[258104]: 2025-12-04 10:49:08.703842261 +0000 UTC m=+0.043897677 container create 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:49:08 compute-0 systemd[1]: Started libpod-conmon-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope.
Dec 04 10:49:08 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:08 compute-0 podman[258104]: 2025-12-04 10:49:08.685413969 +0000 UTC m=+0.025469295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:08 compute-0 podman[258104]: 2025-12-04 10:49:08.791975484 +0000 UTC m=+0.132030810 container init 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:49:08 compute-0 podman[258104]: 2025-12-04 10:49:08.799205711 +0000 UTC m=+0.139261017 container start 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:49:08 compute-0 podman[258104]: 2025-12-04 10:49:08.802345009 +0000 UTC m=+0.142400345 container attach 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 04 10:49:08 compute-0 ceph-mon[75358]: pgmap v1192: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/16e840ce-ed12-467f-88c3-048d9d944422'.
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta.tmp'
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta.tmp' to config b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta'
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]: {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     "0": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "devices": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "/dev/loop3"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             ],
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_name": "ceph_lv0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_size": "21470642176",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "name": "ceph_lv0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "tags": {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_name": "ceph",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.crush_device_class": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.encrypted": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.objectstore": "bluestore",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_id": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.vdo": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.with_tpm": "0"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             },
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "vg_name": "ceph_vg0"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         }
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     ],
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     "1": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "devices": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "/dev/loop4"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             ],
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_name": "ceph_lv1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_size": "21470642176",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "name": "ceph_lv1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "tags": {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_name": "ceph",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.crush_device_class": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.encrypted": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.objectstore": "bluestore",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_id": "1",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.vdo": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.with_tpm": "0"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             },
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "vg_name": "ceph_vg1"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         }
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     ],
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     "2": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "devices": [
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "/dev/loop5"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             ],
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_name": "ceph_lv2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_size": "21470642176",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "name": "ceph_lv2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "tags": {
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.cluster_name": "ceph",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.crush_device_class": "",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.encrypted": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.objectstore": "bluestore",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osd_id": "2",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.vdo": "0",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:                 "ceph.with_tpm": "0"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             },
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "type": "block",
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:             "vg_name": "ceph_vg2"
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:         }
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]:     ]
Dec 04 10:49:09 compute-0 heuristic_hopper[258121]: }
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:49:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:49:09 compute-0 systemd[1]: libpod-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope: Deactivated successfully.
Dec 04 10:49:09 compute-0 podman[258104]: 2025-12-04 10:49:09.09546367 +0000 UTC m=+0.435518976 container died 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 04 10:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb-merged.mount: Deactivated successfully.
Dec 04 10:49:09 compute-0 podman[258104]: 2025-12-04 10:49:09.139446499 +0000 UTC m=+0.479501815 container remove 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:49:09 compute-0 systemd[1]: libpod-conmon-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope: Deactivated successfully.
Dec 04 10:49:09 compute-0 sudo[258026]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:09 compute-0 sudo[258142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:49:09 compute-0 sudo[258142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:09 compute-0 sudo[258142]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:09 compute-0 sudo[258167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:49:09 compute-0 sudo[258167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.588825204 +0000 UTC m=+0.044671876 container create ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:49:09 compute-0 systemd[1]: Started libpod-conmon-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope.
Dec 04 10:49:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.568859144 +0000 UTC m=+0.024705826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.67465493 +0000 UTC m=+0.130501602 container init ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.682454811 +0000 UTC m=+0.138301463 container start ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.685394453 +0000 UTC m=+0.141241135 container attach ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:09 compute-0 angry_khorana[258220]: 167 167
Dec 04 10:49:09 compute-0 systemd[1]: libpod-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope: Deactivated successfully.
Dec 04 10:49:09 compute-0 conmon[258220]: conmon ea04c84d09c1be28b036 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope/container/memory.events
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.690862317 +0000 UTC m=+0.146708969 container died ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 04 10:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a80d9091f3f8819094a02fbf5383d4a3ac1c5a9f310681c55bb472fe4a86c6-merged.mount: Deactivated successfully.
Dec 04 10:49:09 compute-0 podman[258204]: 2025-12-04 10:49:09.725285282 +0000 UTC m=+0.181131934 container remove ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:49:09 compute-0 systemd[1]: libpod-conmon-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope: Deactivated successfully.
Dec 04 10:49:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:09 compute-0 podman[258245]: 2025-12-04 10:49:09.878082551 +0000 UTC m=+0.041945351 container create 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:09 compute-0 systemd[1]: Started libpod-conmon-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope.
Dec 04 10:49:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:49:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec 04 10:49:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:49:09 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:49:09 compute-0 podman[258245]: 2025-12-04 10:49:09.947954615 +0000 UTC m=+0.111817425 container init 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:49:09 compute-0 podman[258245]: 2025-12-04 10:49:09.859162807 +0000 UTC m=+0.023025637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:49:09 compute-0 podman[258245]: 2025-12-04 10:49:09.955423248 +0000 UTC m=+0.119286058 container start 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:49:09 compute-0 podman[258245]: 2025-12-04 10:49:09.958886943 +0000 UTC m=+0.122749753 container attach 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:49:10 compute-0 lvm[258342]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:49:10 compute-0 lvm[258342]: VG ceph_vg1 finished
Dec 04 10:49:10 compute-0 lvm[258341]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:49:10 compute-0 lvm[258341]: VG ceph_vg0 finished
Dec 04 10:49:10 compute-0 lvm[258344]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:49:10 compute-0 lvm[258344]: VG ceph_vg2 finished
Dec 04 10:49:10 compute-0 cranky_khayyam[258262]: {}
Dec 04 10:49:10 compute-0 podman[258245]: 2025-12-04 10:49:10.808004735 +0000 UTC m=+0.971867535 container died 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:49:10 compute-0 systemd[1]: libpod-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Deactivated successfully.
Dec 04 10:49:10 compute-0 systemd[1]: libpod-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Consumed 1.341s CPU time.
Dec 04 10:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f-merged.mount: Deactivated successfully.
Dec 04 10:49:10 compute-0 ceph-mon[75358]: pgmap v1193: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec 04 10:49:10 compute-0 podman[258245]: 2025-12-04 10:49:10.939262225 +0000 UTC m=+1.103125035 container remove 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 04 10:49:10 compute-0 systemd[1]: libpod-conmon-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Deactivated successfully.
Dec 04 10:49:10 compute-0 sudo[258167]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:49:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:49:11 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:11 compute-0 sudo[258361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:49:11 compute-0 sudo[258361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:49:11 compute-0 sudo[258361]: pam_unix(sudo:session): session closed for user root
Dec 04 10:49:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:49:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:49:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:49:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:49:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 4 op/s
Dec 04 10:49:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:12 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:49:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7384c38f-046a-4732-911b-7fca953ef69a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7384c38f-046a-4732-911b-7fca953ef69a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7384c38f-046a-4732-911b-7fca953ef69a' of type subvolume
Dec 04 10:49:12 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:12.606+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7384c38f-046a-4732-911b-7fca953ef69a' of type subvolume
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a'' moved to trashcan
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:49:12 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec 04 10:49:13 compute-0 ceph-mon[75358]: pgmap v1194: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 4 op/s
Dec 04 10:49:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Dec 04 10:49:13 compute-0 podman[258386]: 2025-12-04 10:49:13.965631374 +0000 UTC m=+0.066056621 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 04 10:49:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec 04 10:49:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "force": true, "format": "json"}]: dispatch
Dec 04 10:49:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Dec 04 10:49:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Dec 04 10:49:15 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Dec 04 10:49:15 compute-0 ceph-mon[75358]: pgmap v1195: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Dec 04 10:49:15 compute-0 ceph-mon[75358]: osdmap e162: 3 total, 3 up, 3 in
Dec 04 10:49:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 62 KiB/s wr, 5 op/s
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/cb682cf0-c1e6-441d-935e-9c8f78e43725'.
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:49:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:49:17 compute-0 ceph-mon[75358]: pgmap v1197: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 62 KiB/s wr, 5 op/s
Dec 04 10:49:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:49:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec 04 10:49:17 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:49:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:19 compute-0 ceph-mon[75358]: pgmap v1198: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "format": "json"}]: dispatch
Dec 04 10:49:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:19 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:49:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "format": "json"}]: dispatch
Dec 04 10:49:20 compute-0 ceph-mon[75358]: pgmap v1199: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:22 compute-0 ceph-mon[75358]: pgmap v1200: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec 04 10:49:22 compute-0 podman[258408]: 2025-12-04 10:49:22.941981724 +0000 UTC m=+0.049255689 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:49:22 compute-0 podman[258407]: 2025-12-04 10:49:22.973439126 +0000 UTC m=+0.080810694 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "target_sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, target_sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/423a692f-d7d1-49c7-ba07-ce101229d3f2'.
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id cb274a61-bac4-4985-8c30-8cdb47d7bbd3 for path b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, target_sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9)
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9) -- by 0 seconds
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec 04 10:49:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Dec 04 10:49:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "target_sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:49:23 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:49:23 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.iwufnj(active, since 34m)
Dec 04 10:49:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.snap/24077abd-b36a-49fd-87f6-98a6b2f3bbce/cb682cf0-c1e6-441d-935e-9c8f78e43725' to b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/423a692f-d7d1-49c7-ba07-ce101229d3f2'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking cb274a61-bac4-4985-8c30-8cdb47d7bbd3
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec 04 10:49:24 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9)
Dec 04 10:49:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec 04 10:49:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 51 KiB/s wr, 3 op/s
Dec 04 10:49:25 compute-0 ceph-mon[75358]: pgmap v1201: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Dec 04 10:49:25 compute-0 ceph-mon[75358]: mgrmap e16: compute-0.iwufnj(active, since 34m)
Dec 04 10:49:25 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.iwufnj(active, since 34m)
Dec 04 10:49:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:49:26
Dec 04 10:49:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:49:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:49:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'volumes', 'images']
Dec 04 10:49:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:49:27 compute-0 ceph-mon[75358]: pgmap v1202: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 51 KiB/s wr, 3 op/s
Dec 04 10:49:27 compute-0 ceph-mon[75358]: mgrmap e17: compute-0.iwufnj(active, since 34m)
Dec 04 10:49:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 80 KiB/s wr, 7 op/s
Dec 04 10:49:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:29 compute-0 ceph-mon[75358]: pgmap v1203: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 80 KiB/s wr, 7 op/s
Dec 04 10:49:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 5 op/s
Dec 04 10:49:29 compute-0 ceph-mgr[75651]: [progress INFO root] Writing back 18 completed events
Dec 04 10:49:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 04 10:49:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:30 compute-0 ceph-mon[75358]: pgmap v1204: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 5 op/s
Dec 04 10:49:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:49:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Dec 04 10:49:33 compute-0 ceph-mon[75358]: pgmap v1205: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Dec 04 10:49:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 67 KiB/s wr, 6 op/s
Dec 04 10:49:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:35 compute-0 ceph-mon[75358]: pgmap v1206: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 67 KiB/s wr, 6 op/s
Dec 04 10:49:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005518787578034542 of space, bias 4.0, pg target 0.6622545093641451 quantized to 16 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:49:37 compute-0 ceph-mon[75358]: pgmap v1207: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec 04 10:49:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec 04 10:49:39 compute-0 ceph-mon[75358]: pgmap v1208: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec 04 10:49:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec 04 10:49:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:41 compute-0 ceph-mon[75358]: pgmap v1209: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec 04 10:49:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec 04 10:49:42 compute-0 ceph-mon[75358]: pgmap v1210: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec 04 10:49:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.5 KiB/s wr, 0 op/s
Dec 04 10:49:44 compute-0 nova_compute[244644]: 2025-12-04 10:49:44.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:44 compute-0 podman[258488]: 2025-12-04 10:49:44.948428405 +0000 UTC m=+0.053263182 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 04 10:49:45 compute-0 ceph-mon[75358]: pgmap v1211: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.5 KiB/s wr, 0 op/s
Dec 04 10:49:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:47 compute-0 ceph-mon[75358]: pgmap v1212: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:47 compute-0 nova_compute[244644]: 2025-12-04 10:49:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:47 compute-0 nova_compute[244644]: 2025-12-04 10:49:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:49:47 compute-0 nova_compute[244644]: 2025-12-04 10:49:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:49:47 compute-0 nova_compute[244644]: 2025-12-04 10:49:47.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:49:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:47 compute-0 sshd-session[258510]: Invalid user admin123 from 101.47.163.20 port 46406
Dec 04 10:49:48 compute-0 sshd-session[258510]: Received disconnect from 101.47.163.20 port 46406:11: Bye Bye [preauth]
Dec 04 10:49:48 compute-0 sshd-session[258510]: Disconnected from invalid user admin123 101.47.163.20 port 46406 [preauth]
Dec 04 10:49:48 compute-0 nova_compute[244644]: 2025-12-04 10:49:48.351 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:49 compute-0 ceph-mon[75358]: pgmap v1213: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:49 compute-0 nova_compute[244644]: 2025-12-04 10:49:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:49:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:49:50 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131830048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:49:50 compute-0 nova_compute[244644]: 2025-12-04 10:49:50.902 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.052 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5005MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.117 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.118 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.139 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:49:51 compute-0 ceph-mon[75358]: pgmap v1214: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3131830048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:49:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:49:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368910182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.682 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.689 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.703 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.705 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:49:51 compute-0 nova_compute[244644]: 2025-12-04 10:49:51.705 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:49:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3368910182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:49:53 compute-0 ceph-mon[75358]: pgmap v1215: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:53 compute-0 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:53 compute-0 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:53 compute-0 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:53 compute-0 nova_compute[244644]: 2025-12-04 10:49:53.707 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:49:53 compute-0 podman[258557]: 2025-12-04 10:49:53.968372472 +0000 UTC m=+0.069156713 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 04 10:49:54 compute-0 podman[258556]: 2025-12-04 10:49:54.002806109 +0000 UTC m=+0.106434691 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:49:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:49:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.918 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:49:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:49:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:49:55 compute-0 ceph-mon[75358]: pgmap v1216: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:55 compute-0 nova_compute[244644]: 2025-12-04 10:49:55.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:55 compute-0 nova_compute[244644]: 2025-12-04 10:49:55.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:49:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:57 compute-0 ceph-mon[75358]: pgmap v1217: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:57 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:49:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:49:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:49:59 compute-0 ceph-mon[75358]: pgmap v1218: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:04 compute-0 ceph-mon[75358]: pgmap v1219: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:05 compute-0 ceph-mon[75358]: pgmap v1220: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:05 compute-0 ceph-mon[75358]: pgmap v1221: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:07 compute-0 ceph-mon[75358]: pgmap v1222: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:07 compute-0 sshd-session[258600]: Invalid user alex from 107.175.213.239 port 41486
Dec 04 10:50:07 compute-0 sshd-session[258600]: Received disconnect from 107.175.213.239 port 41486:11: Bye Bye [preauth]
Dec 04 10:50:07 compute-0 sshd-session[258600]: Disconnected from invalid user alex 107.175.213.239 port 41486 [preauth]
Dec 04 10:50:09 compute-0 ceph-mon[75358]: pgmap v1223: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:11 compute-0 sudo[258602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:50:11 compute-0 sudo[258602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:11 compute-0 sudo[258602]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:11 compute-0 sudo[258627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 10:50:11 compute-0 sudo[258627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:11 compute-0 ceph-mon[75358]: pgmap v1224: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:50:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:50:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:50:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:50:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:12 compute-0 podman[258698]: 2025-12-04 10:50:12.085836236 +0000 UTC m=+0.369367011 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 04 10:50:12 compute-0 podman[258698]: 2025-12-04 10:50:12.242614384 +0000 UTC m=+0.526145169 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:50:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:50:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:50:13 compute-0 sudo[258627]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:50:13 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:50:13 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:13 compute-0 sudo[258888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:50:13 compute-0 sudo[258888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:13 compute-0 sudo[258888]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:13 compute-0 sudo[258913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:50:13 compute-0 sudo[258913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:13 compute-0 ceph-mon[75358]: pgmap v1225: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:13 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:13 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:13 compute-0 sudo[258913]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:50:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:50:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:50:13 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:50:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:50:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:50:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:50:14 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:50:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:50:14 compute-0 sudo[258969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:50:14 compute-0 sudo[258969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:14 compute-0 sudo[258969]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:14 compute-0 sudo[258994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:50:14 compute-0 sudo[258994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:14 compute-0 ceph-mon[75358]: pgmap v1226: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:50:14 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:50:14 compute-0 podman[259031]: 2025-12-04 10:50:14.795645123 +0000 UTC m=+0.040031387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.365487216 +0000 UTC m=+0.609873390 container create f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:50:15 compute-0 systemd[1]: Started libpod-conmon-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope.
Dec 04 10:50:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.454284691 +0000 UTC m=+0.698670885 container init f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.466089782 +0000 UTC m=+0.710475956 container start f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:50:15 compute-0 elastic_hertz[259049]: 167 167
Dec 04 10:50:15 compute-0 systemd[1]: libpod-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope: Deactivated successfully.
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.481694506 +0000 UTC m=+0.726080710 container attach f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.483007298 +0000 UTC m=+0.727393472 container died f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:50:15 compute-0 podman[259048]: 2025-12-04 10:50:15.483583052 +0000 UTC m=+0.069658324 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec 04 10:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-af06cecb1a94850dc47d562b06c197aaff21740f3be592d5758060f1eb4e378a-merged.mount: Deactivated successfully.
Dec 04 10:50:15 compute-0 podman[259031]: 2025-12-04 10:50:15.532123037 +0000 UTC m=+0.776509221 container remove f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:50:15 compute-0 systemd[1]: libpod-conmon-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope: Deactivated successfully.
Dec 04 10:50:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:15 compute-0 podman[259094]: 2025-12-04 10:50:15.716539266 +0000 UTC m=+0.025875519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:15 compute-0 podman[259094]: 2025-12-04 10:50:15.904706346 +0000 UTC m=+0.214042569 container create 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:50:15 compute-0 systemd[1]: Started libpod-conmon-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope.
Dec 04 10:50:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:16 compute-0 podman[259094]: 2025-12-04 10:50:16.095244925 +0000 UTC m=+0.404581248 container init 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:50:16 compute-0 podman[259094]: 2025-12-04 10:50:16.105153289 +0000 UTC m=+0.414489522 container start 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:50:16 compute-0 podman[259094]: 2025-12-04 10:50:16.109885096 +0000 UTC m=+0.419221379 container attach 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:50:16 compute-0 clever_lamport[259110]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:50:16 compute-0 clever_lamport[259110]: --> All data devices are unavailable
Dec 04 10:50:16 compute-0 systemd[1]: libpod-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope: Deactivated successfully.
Dec 04 10:50:16 compute-0 podman[259094]: 2025-12-04 10:50:16.670154703 +0000 UTC m=+0.979490976 container died 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:50:17 compute-0 ceph-mon[75358]: pgmap v1227: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701-merged.mount: Deactivated successfully.
Dec 04 10:50:17 compute-0 podman[259094]: 2025-12-04 10:50:17.316544991 +0000 UTC m=+1.625881254 container remove 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 04 10:50:17 compute-0 systemd[1]: libpod-conmon-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope: Deactivated successfully.
Dec 04 10:50:17 compute-0 sudo[258994]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:17 compute-0 sudo[259144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:50:17 compute-0 sudo[259144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:17 compute-0 sudo[259144]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:17 compute-0 sudo[259169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:50:17 compute-0 sudo[259169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:17 compute-0 podman[259205]: 2025-12-04 10:50:17.852946982 +0000 UTC m=+0.029934288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.038311294 +0000 UTC m=+0.215298500 container create 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:50:18 compute-0 systemd[1]: Started libpod-conmon-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope.
Dec 04 10:50:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.154041102 +0000 UTC m=+0.331028348 container init 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.165678768 +0000 UTC m=+0.342666014 container start 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.171119082 +0000 UTC m=+0.348106388 container attach 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:50:18 compute-0 systemd[1]: libpod-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope: Deactivated successfully.
Dec 04 10:50:18 compute-0 xenodochial_cannon[259222]: 167 167
Dec 04 10:50:18 compute-0 conmon[259222]: conmon 5fa059524e5e668a5d00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope/container/memory.events
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.174875945 +0000 UTC m=+0.351863781 container died 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbf8146c4ce111efd43bebcfed531a73596b9e9cca077f3fd49c4b10c1f6420e-merged.mount: Deactivated successfully.
Dec 04 10:50:18 compute-0 podman[259205]: 2025-12-04 10:50:18.237002193 +0000 UTC m=+0.413989429 container remove 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:50:18 compute-0 systemd[1]: libpod-conmon-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope: Deactivated successfully.
Dec 04 10:50:18 compute-0 podman[259245]: 2025-12-04 10:50:18.408310709 +0000 UTC m=+0.026954954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:18 compute-0 podman[259245]: 2025-12-04 10:50:18.673170667 +0000 UTC m=+0.291814882 container create 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:50:18 compute-0 systemd[1]: Started libpod-conmon-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope.
Dec 04 10:50:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:18 compute-0 podman[259245]: 2025-12-04 10:50:18.792807542 +0000 UTC m=+0.411451767 container init 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 04 10:50:18 compute-0 podman[259245]: 2025-12-04 10:50:18.807323498 +0000 UTC m=+0.425967703 container start 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:50:18 compute-0 podman[259245]: 2025-12-04 10:50:18.812622519 +0000 UTC m=+0.431266754 container attach 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]: {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     "0": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "devices": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "/dev/loop3"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             ],
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_name": "ceph_lv0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_size": "21470642176",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "name": "ceph_lv0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "tags": {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_name": "ceph",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.crush_device_class": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.encrypted": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.objectstore": "bluestore",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_id": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.vdo": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.with_tpm": "0"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             },
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "vg_name": "ceph_vg0"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         }
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     ],
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     "1": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "devices": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "/dev/loop4"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             ],
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_name": "ceph_lv1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_size": "21470642176",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "name": "ceph_lv1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "tags": {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_name": "ceph",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.crush_device_class": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.encrypted": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.objectstore": "bluestore",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_id": "1",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.vdo": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.with_tpm": "0"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             },
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "vg_name": "ceph_vg1"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         }
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     ],
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     "2": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "devices": [
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "/dev/loop5"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             ],
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_name": "ceph_lv2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_size": "21470642176",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "name": "ceph_lv2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "tags": {
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.cluster_name": "ceph",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.crush_device_class": "",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.encrypted": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.objectstore": "bluestore",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osd_id": "2",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.vdo": "0",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:                 "ceph.with_tpm": "0"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             },
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "type": "block",
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:             "vg_name": "ceph_vg2"
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:         }
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]:     ]
Dec 04 10:50:19 compute-0 pedantic_perlman[259261]: }
Dec 04 10:50:19 compute-0 systemd[1]: libpod-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope: Deactivated successfully.
Dec 04 10:50:19 compute-0 podman[259245]: 2025-12-04 10:50:19.139524613 +0000 UTC m=+0.758168808 container died 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 04 10:50:19 compute-0 ceph-mon[75358]: pgmap v1228: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230-merged.mount: Deactivated successfully.
Dec 04 10:50:19 compute-0 podman[259245]: 2025-12-04 10:50:19.60507246 +0000 UTC m=+1.223716665 container remove 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:50:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:19 compute-0 systemd[1]: libpod-conmon-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope: Deactivated successfully.
Dec 04 10:50:19 compute-0 sudo[259169]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:19 compute-0 sudo[259282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:50:19 compute-0 sudo[259282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:19 compute-0 sudo[259282]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:19 compute-0 sudo[259307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:50:19 compute-0 sudo[259307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.084679374 +0000 UTC m=+0.028619126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.138870757 +0000 UTC m=+0.082810409 container create 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:50:20 compute-0 systemd[1]: Started libpod-conmon-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope.
Dec 04 10:50:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.360198033 +0000 UTC m=+0.304137705 container init 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.368760574 +0000 UTC m=+0.312700226 container start 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:50:20 compute-0 suspicious_brattain[259361]: 167 167
Dec 04 10:50:20 compute-0 systemd[1]: libpod-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope: Deactivated successfully.
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.381023816 +0000 UTC m=+0.324963488 container attach 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.381520079 +0000 UTC m=+0.325459731 container died 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e228dfa4a1f310fb2fbd7b4a75dfa1743499162b01e4de6c44613f93fe7163a-merged.mount: Deactivated successfully.
Dec 04 10:50:20 compute-0 podman[259345]: 2025-12-04 10:50:20.430252477 +0000 UTC m=+0.374192139 container remove 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:50:20 compute-0 systemd[1]: libpod-conmon-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope: Deactivated successfully.
Dec 04 10:50:20 compute-0 podman[259387]: 2025-12-04 10:50:20.577442129 +0000 UTC m=+0.025874607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:50:20 compute-0 podman[259387]: 2025-12-04 10:50:20.999439255 +0000 UTC m=+0.447871703 container create 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:50:21 compute-0 systemd[1]: Started libpod-conmon-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope.
Dec 04 10:50:21 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:50:21 compute-0 podman[259387]: 2025-12-04 10:50:21.097000076 +0000 UTC m=+0.545432544 container init 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:50:21 compute-0 podman[259387]: 2025-12-04 10:50:21.106476559 +0000 UTC m=+0.554908987 container start 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:50:21 compute-0 podman[259387]: 2025-12-04 10:50:21.1109708 +0000 UTC m=+0.559403258 container attach 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:50:21 compute-0 ceph-mon[75358]: pgmap v1229: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:21 compute-0 lvm[259481]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:50:21 compute-0 lvm[259482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:50:21 compute-0 lvm[259481]: VG ceph_vg0 finished
Dec 04 10:50:21 compute-0 lvm[259482]: VG ceph_vg1 finished
Dec 04 10:50:21 compute-0 lvm[259484]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:50:21 compute-0 lvm[259484]: VG ceph_vg2 finished
Dec 04 10:50:21 compute-0 lvm[259486]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:50:21 compute-0 lvm[259486]: VG ceph_vg2 finished
Dec 04 10:50:21 compute-0 competent_hamilton[259403]: {}
Dec 04 10:50:22 compute-0 systemd[1]: libpod-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Deactivated successfully.
Dec 04 10:50:22 compute-0 systemd[1]: libpod-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Consumed 1.588s CPU time.
Dec 04 10:50:22 compute-0 podman[259387]: 2025-12-04 10:50:22.034062777 +0000 UTC m=+1.482495215 container died 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3-merged.mount: Deactivated successfully.
Dec 04 10:50:22 compute-0 podman[259387]: 2025-12-04 10:50:22.09150806 +0000 UTC m=+1.539940528 container remove 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 04 10:50:22 compute-0 systemd[1]: libpod-conmon-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Deactivated successfully.
Dec 04 10:50:22 compute-0 sudo[259307]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:50:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:50:22 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:22 compute-0 sudo[259498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:50:22 compute-0 sudo[259498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:50:22 compute-0 sudo[259498]: pam_unix(sudo:session): session closed for user root
Dec 04 10:50:22 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:22 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:23 compute-0 ceph-mon[75358]: pgmap v1230: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:23 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:50:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:24 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:25 compute-0 podman[259524]: 2025-12-04 10:50:24.999957356 +0000 UTC m=+0.086699554 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 04 10:50:25 compute-0 podman[259523]: 2025-12-04 10:50:25.058031036 +0000 UTC m=+0.143999295 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 04 10:50:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:25 compute-0 ceph-mon[75358]: pgmap v1231: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:50:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:50:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:50:26
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta']
Dec 04 10:50:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:50:27 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8455621a30>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84556216a0>)]
Dec 04 10:50:27 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad2160>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183c83d0>)]
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:50:28 compute-0 ceph-mon[75358]: pgmap v1232: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:50:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad8eb0>)]
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:28 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9'' moved to trashcan
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec 04 10:50:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:30 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.iwufnj(active, since 36m)
Dec 04 10:50:31 compute-0 ceph-mon[75358]: pgmap v1233: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec 04 10:50:32 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec 04 10:50:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:32 compute-0 ceph-mon[75358]: pgmap v1234: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:32 compute-0 ceph-mon[75358]: mgrmap e18: compute-0.iwufnj(active, since 36m)
Dec 04 10:50:33 compute-0 ceph-mon[75358]: pgmap v1235: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 04 10:50:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:33 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec 04 10:50:35 compute-0 ceph-mon[75358]: pgmap v1236: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec 04 10:50:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:382512d2-4ae6-4a25-96be-5898161f749d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:382512d2-4ae6-4a25-96be-5898161f749d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:50:35 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:35.519+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '382512d2-4ae6-4a25-96be-5898161f749d' of type subvolume
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '382512d2-4ae6-4a25-96be-5898161f749d' of type subvolume
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d'' moved to trashcan
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec 04 10:50:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec 04 10:50:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Dec 04 10:50:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Dec 04 10:50:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec 04 10:50:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "force": true, "format": "json"}]: dispatch
Dec 04 10:50:36 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005574210699068629 of space, bias 4.0, pg target 0.6689052838882354 quantized to 16 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:50:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec 04 10:50:37 compute-0 ceph-mon[75358]: pgmap v1237: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec 04 10:50:37 compute-0 ceph-mon[75358]: osdmap e163: 3 total, 3 up, 3 in
Dec 04 10:50:38 compute-0 ceph-mon[75358]: pgmap v1239: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec 04 10:50:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec 04 10:50:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:40 compute-0 ceph-mon[75358]: pgmap v1240: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec 04 10:50:41 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 5 op/s
Dec 04 10:50:42 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:42.062 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:50:42 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:42.063 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:50:43 compute-0 ceph-mon[75358]: pgmap v1241: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 5 op/s
Dec 04 10:50:43 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 42 KiB/s wr, 3 op/s
Dec 04 10:50:44 compute-0 ceph-mon[75358]: pgmap v1242: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 42 KiB/s wr, 3 op/s
Dec 04 10:50:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Dec 04 10:50:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Dec 04 10:50:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Dec 04 10:50:45 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 47 KiB/s wr, 3 op/s
Dec 04 10:50:45 compute-0 podman[259594]: 2025-12-04 10:50:45.945352064 +0000 UTC m=+0.054703627 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 04 10:50:46 compute-0 ceph-mon[75358]: osdmap e164: 3 total, 3 up, 3 in
Dec 04 10:50:46 compute-0 ceph-mon[75358]: pgmap v1244: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 47 KiB/s wr, 3 op/s
Dec 04 10:50:46 compute-0 nova_compute[244644]: 2025-12-04 10:50:46.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:47 compute-0 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:47 compute-0 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:50:47 compute-0 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:50:47 compute-0 nova_compute[244644]: 2025-12-04 10:50:47.366 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:50:47 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec 04 10:50:47 compute-0 ceph-mon[75358]: pgmap v1245: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec 04 10:50:48 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:48.065 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:50:49 compute-0 nova_compute[244644]: 2025-12-04 10:50:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:49 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec 04 10:50:50 compute-0 ceph-mon[75358]: pgmap v1246: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec 04 10:50:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.448 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:50:50 compute-0 nova_compute[244644]: 2025-12-04 10:50:50.448 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/67c71b68-b799-452e-b991-191544991adf'.
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:50 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:50:50 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:50:50 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194033497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.002 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.139 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5020MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:50:51 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:50:51 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec 04 10:50:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/194033497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.226 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.227 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.244 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:50:51 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec 04 10:50:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:50:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258707864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.814 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.821 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.840 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.842 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:50:51 compute-0 nova_compute[244644]: 2025-12-04 10:50:51.842 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:50:52 compute-0 ceph-mon[75358]: pgmap v1247: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec 04 10:50:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/258707864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:50:53 compute-0 sshd-session[259615]: Invalid user admin from 203.123.219.137 port 40851
Dec 04 10:50:53 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec 04 10:50:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "format": "json"}]: dispatch
Dec 04 10:50:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:53 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:50:53 compute-0 nova_compute[244644]: 2025-12-04 10:50:53.843 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:53 compute-0 nova_compute[244644]: 2025-12-04 10:50:53.843 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:50:53 compute-0 sshd-session[259615]: Connection closed by invalid user admin 203.123.219.137 port 40851 [preauth]
Dec 04 10:50:54 compute-0 nova_compute[244644]: 2025-12-04 10:50:54.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:54 compute-0 ceph-mon[75358]: pgmap v1248: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec 04 10:50:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:50:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:50:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:50:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:50:55 compute-0 nova_compute[244644]: 2025-12-04 10:50:55.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:55 compute-0 nova_compute[244644]: 2025-12-04 10:50:55.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:55 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec 04 10:50:55 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "format": "json"}]: dispatch
Dec 04 10:50:55 compute-0 podman[259662]: 2025-12-04 10:50:55.944128568 +0000 UTC m=+0.047111540 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:50:55 compute-0 podman[259661]: 2025-12-04 10:50:55.997131983 +0000 UTC m=+0.095785069 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:50:56 compute-0 ceph-mon[75358]: pgmap v1249: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec 04 10:50:57 compute-0 nova_compute[244644]: 2025-12-04 10:50:57.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:50:57 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:50:57 compute-0 ceph-mon[75358]: pgmap v1250: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/2f8aed1d-7200-4215-967a-dbcd84383a27'.
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta.tmp'
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta.tmp' to config b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta'
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:50:58 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:50:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:50:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:50:59 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec 04 10:50:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:50:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:50:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:50:59 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:51:00 compute-0 ceph-mon[75358]: pgmap v1251: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:51:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:01 compute-0 ceph-mon[75358]: pgmap v1252: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:01 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:01.961+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dada9dc-6e1e-4a21-96e0-c09b80328b04' of type subvolume
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dada9dc-6e1e-4a21-96e0-c09b80328b04' of type subvolume
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04'' moved to trashcan
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:01 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec 04 10:51:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec 04 10:51:03 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:03 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 3 op/s
Dec 04 10:51:04 compute-0 ceph-mon[75358]: pgmap v1253: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 3 op/s
Dec 04 10:51:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:05 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s wr, 2 op/s
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/3a207827-d5fd-419f-acaa-6c76538172dc'.
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta.tmp'
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta.tmp' to config b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta'
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:06 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:51:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:06 compute-0 ceph-mon[75358]: pgmap v1254: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s wr, 2 op/s
Dec 04 10:51:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:07 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 73 KiB/s wr, 4 op/s
Dec 04 10:51:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:07 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.717133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467717251, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2407, "num_deletes": 506, "total_data_size": 3753390, "memory_usage": 3847280, "flush_reason": "Manual Compaction"}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467744241, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3419157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26255, "largest_seqno": 28661, "table_properties": {"data_size": 3409047, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25408, "raw_average_key_size": 20, "raw_value_size": 3386233, "raw_average_value_size": 2715, "num_data_blocks": 262, "num_entries": 1247, "num_filter_entries": 1247, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845262, "oldest_key_time": 1764845262, "file_creation_time": 1764845467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 27188 microseconds, and 10275 cpu microseconds.
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.744330) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3419157 bytes OK
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.744366) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746772) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746794) EVENT_LOG_v1 {"time_micros": 1764845467746788, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3742155, prev total WAL file size 3742155, number of live WAL files 2.
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.748150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3339KB)], [59(9740KB)]
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467748212, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13393257, "oldest_snapshot_seqno": -1}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5874 keys, 8850047 bytes, temperature: kUnknown
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467804934, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8850047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8810748, "index_size": 23509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 146717, "raw_average_key_size": 24, "raw_value_size": 8705403, "raw_average_value_size": 1482, "num_data_blocks": 965, "num_entries": 5874, "num_filter_entries": 5874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.805262) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8850047 bytes
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.807214) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.7 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.5) write-amplify(2.6) OK, records in: 6885, records dropped: 1011 output_compression: NoCompression
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.807262) EVENT_LOG_v1 {"time_micros": 1764845467807244, "job": 32, "event": "compaction_finished", "compaction_time_micros": 56815, "compaction_time_cpu_micros": 21528, "output_level": 6, "num_output_files": 1, "total_output_size": 8850047, "num_input_records": 6885, "num_output_records": 5874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467808362, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467810523, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.748065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:07 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:51:08 compute-0 ceph-mon[75358]: pgmap v1255: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 73 KiB/s wr, 4 op/s
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:09 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:09.627+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac41ff8b-3e5d-413c-842a-4731aa5fec9c' of type subvolume
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac41ff8b-3e5d-413c-842a-4731aa5fec9c' of type subvolume
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c'' moved to trashcan
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec 04 10:51:09 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec 04 10:51:09 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:09 compute-0 ceph-mon[75358]: pgmap v1256: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:51:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:51:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:51:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:51:11 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:51:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:51:12 compute-0 ceph-mon[75358]: pgmap v1257: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/d0d02848-9da1-4624-b31a-63cb7ff261f4'.
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta.tmp'
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta.tmp' to config b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta'
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:51:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:13 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec 04 10:51:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:14 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec 04 10:51:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:14 compute-0 ceph-mon[75358]: pgmap v1258: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec 04 10:51:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:15 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 3 op/s
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:16 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:16.397+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a25a2b9-950c-410a-9ad7-d3f8bbfb3687' of type subvolume
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a25a2b9-950c-410a-9ad7-d3f8bbfb3687' of type subvolume
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687'' moved to trashcan
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:16 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec 04 10:51:16 compute-0 ceph-mon[75358]: pgmap v1259: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 3 op/s
Dec 04 10:51:16 compute-0 podman[259706]: 2025-12-04 10:51:16.948327607 +0000 UTC m=+0.056809429 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 04 10:51:17 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 5 op/s
Dec 04 10:51:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec 04 10:51:17 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:18 compute-0 ceph-mon[75358]: pgmap v1260: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 5 op/s
Dec 04 10:51:19 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 3 op/s
Dec 04 10:51:19 compute-0 ceph-mon[75358]: pgmap v1261: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 3 op/s
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/967bbf13-d2e7-4e83-a18f-dbf7bce7d877'.
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta.tmp'
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta.tmp' to config b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta'
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:20 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:51:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:20 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec 04 10:51:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:21 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 77 KiB/s wr, 4 op/s
Dec 04 10:51:21 compute-0 ceph-mon[75358]: pgmap v1262: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 77 KiB/s wr, 4 op/s
Dec 04 10:51:22 compute-0 sudo[259726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:51:22 compute-0 sudo[259726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:22 compute-0 sudo[259726]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:22 compute-0 sudo[259751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:51:22 compute-0 sudo[259751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:23 compute-0 sudo[259751]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:22efd76f-f190-4877-9402-6f240297ffab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:22efd76f-f190-4877-9402-6f240297ffab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:23 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:23.493+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22efd76f-f190-4877-9402-6f240297ffab' of type subvolume
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22efd76f-f190-4877-9402-6f240297ffab' of type subvolume
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab'' moved to trashcan
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec 04 10:51:23 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:51:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:51:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:51:23 compute-0 sudo[259807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:51:23 compute-0 sudo[259807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:23 compute-0 sudo[259807]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:24 compute-0 sudo[259832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:51:24 compute-0 sudo[259832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:51:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.318302317 +0000 UTC m=+0.024634698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.609272837 +0000 UTC m=+0.315605198 container create 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:51:24 compute-0 systemd[1]: Started libpod-conmon-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope.
Dec 04 10:51:24 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.725800135 +0000 UTC m=+0.432132516 container init 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.73575683 +0000 UTC m=+0.442089191 container start 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.740134077 +0000 UTC m=+0.446466458 container attach 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:51:24 compute-0 recursing_kare[259886]: 167 167
Dec 04 10:51:24 compute-0 systemd[1]: libpod-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope: Deactivated successfully.
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.744882375 +0000 UTC m=+0.451214736 container died 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6fa28a68fe25802bfd8eccef5798615e60395c4f50fbd0c7a816cf9adc2530b-merged.mount: Deactivated successfully.
Dec 04 10:51:24 compute-0 podman[259870]: 2025-12-04 10:51:24.867497472 +0000 UTC m=+0.573829833 container remove 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:51:24 compute-0 systemd[1]: libpod-conmon-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope: Deactivated successfully.
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.135258812 +0000 UTC m=+0.085283430 container create a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.080704459 +0000 UTC m=+0.030729127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:25 compute-0 ceph-mon[75358]: pgmap v1263: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:51:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:51:25 compute-0 systemd[1]: Started libpod-conmon-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope.
Dec 04 10:51:25 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.219058184 +0000 UTC m=+0.169082822 container init a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.229540632 +0000 UTC m=+0.179565250 container start a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.249466622 +0000 UTC m=+0.199491250 container attach a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:51:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:25 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:25 compute-0 confident_mayer[259928]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:51:25 compute-0 confident_mayer[259928]: --> All data devices are unavailable
Dec 04 10:51:25 compute-0 systemd[1]: libpod-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope: Deactivated successfully.
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.73863797 +0000 UTC m=+0.688662598 container died a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 04 10:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a-merged.mount: Deactivated successfully.
Dec 04 10:51:25 compute-0 podman[259911]: 2025-12-04 10:51:25.789229876 +0000 UTC m=+0.739254494 container remove a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:51:25 compute-0 systemd[1]: libpod-conmon-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope: Deactivated successfully.
Dec 04 10:51:25 compute-0 sudo[259832]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:25 compute-0 sudo[259960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:51:25 compute-0 sudo[259960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:25 compute-0 sudo[259960]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:26 compute-0 sudo[259985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:51:26 compute-0 sudo[259985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:26 compute-0 podman[260009]: 2025-12-04 10:51:26.098075686 +0000 UTC m=+0.064776105 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 04 10:51:26 compute-0 podman[260010]: 2025-12-04 10:51:26.12749539 +0000 UTC m=+0.095612104 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:51:26 compute-0 ceph-mon[75358]: pgmap v1264: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.333246833 +0000 UTC m=+0.045721155 container create 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 10:51:26 compute-0 systemd[1]: Started libpod-conmon-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope.
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.313638431 +0000 UTC m=+0.026112763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.429214946 +0000 UTC m=+0.141689358 container init 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.437659613 +0000 UTC m=+0.150133925 container start 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.443085937 +0000 UTC m=+0.155560279 container attach 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:51:26 compute-0 adoring_goldstine[260080]: 167 167
Dec 04 10:51:26 compute-0 systemd[1]: libpod-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope: Deactivated successfully.
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.445623849 +0000 UTC m=+0.158098161 container died 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b4a4ac5bf70162cd9d2e51646acbb27b6447d365463eb8d9c29bb46deceda39-merged.mount: Deactivated successfully.
Dec 04 10:51:26 compute-0 podman[260064]: 2025-12-04 10:51:26.494901182 +0000 UTC m=+0.207375494 container remove 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:51:26 compute-0 systemd[1]: libpod-conmon-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope: Deactivated successfully.
Dec 04 10:51:26 compute-0 podman[260102]: 2025-12-04 10:51:26.670917813 +0000 UTC m=+0.053437986 container create 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:51:26 compute-0 systemd[1]: Started libpod-conmon-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope.
Dec 04 10:51:26 compute-0 podman[260102]: 2025-12-04 10:51:26.645568249 +0000 UTC m=+0.028088422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:51:26
Dec 04 10:51:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:51:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:51:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'images', 'vms', '.rgw.root']
Dec 04 10:51:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:27 compute-0 podman[260102]: 2025-12-04 10:51:27.552734335 +0000 UTC m=+0.935254528 container init 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:51:27 compute-0 podman[260102]: 2025-12-04 10:51:27.568866392 +0000 UTC m=+0.951386545 container start 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:51:27 compute-0 podman[260102]: 2025-12-04 10:51:27.575133166 +0000 UTC m=+0.957653329 container attach 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/e384d99e-5dac-4b84-8f41-b08a2fb8f434'.
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta.tmp'
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta.tmp' to config b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta'
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 04 10:51:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:27 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 5 op/s
Dec 04 10:51:27 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 04 10:51:27 compute-0 bold_jackson[260119]: {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     "0": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "devices": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "/dev/loop3"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             ],
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_name": "ceph_lv0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_size": "21470642176",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "name": "ceph_lv0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "tags": {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_name": "ceph",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.crush_device_class": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.encrypted": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.objectstore": "bluestore",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_id": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.vdo": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.with_tpm": "0"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             },
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "vg_name": "ceph_vg0"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         }
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     ],
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     "1": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "devices": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "/dev/loop4"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             ],
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_name": "ceph_lv1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_size": "21470642176",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "name": "ceph_lv1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "tags": {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_name": "ceph",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.crush_device_class": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.encrypted": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.objectstore": "bluestore",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_id": "1",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.vdo": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.with_tpm": "0"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             },
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "vg_name": "ceph_vg1"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         }
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     ],
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     "2": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "devices": [
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "/dev/loop5"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             ],
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_name": "ceph_lv2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_size": "21470642176",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "name": "ceph_lv2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "tags": {
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.cluster_name": "ceph",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.crush_device_class": "",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.encrypted": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.objectstore": "bluestore",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osd_id": "2",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.vdo": "0",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:                 "ceph.with_tpm": "0"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             },
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "type": "block",
Dec 04 10:51:27 compute-0 bold_jackson[260119]:             "vg_name": "ceph_vg2"
Dec 04 10:51:27 compute-0 bold_jackson[260119]:         }
Dec 04 10:51:27 compute-0 bold_jackson[260119]:     ]
Dec 04 10:51:27 compute-0 bold_jackson[260119]: }
Dec 04 10:51:27 compute-0 systemd[1]: libpod-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope: Deactivated successfully.
Dec 04 10:51:27 compute-0 podman[260102]: 2025-12-04 10:51:27.934800178 +0000 UTC m=+1.317320361 container died 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703-merged.mount: Deactivated successfully.
Dec 04 10:51:27 compute-0 podman[260102]: 2025-12-04 10:51:27.991558325 +0000 UTC m=+1.374078488 container remove 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:51:27 compute-0 systemd[1]: libpod-conmon-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope: Deactivated successfully.
Dec 04 10:51:28 compute-0 sudo[259985]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:51:28 compute-0 sudo[260141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:51:28 compute-0 sudo[260141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:28 compute-0 sudo[260141]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:51:28 compute-0 sudo[260166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:51:28 compute-0 sudo[260166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.514798441 +0000 UTC m=+0.040646891 container create 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:51:28 compute-0 systemd[1]: Started libpod-conmon-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope.
Dec 04 10:51:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.592646057 +0000 UTC m=+0.118494527 container init 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.497613178 +0000 UTC m=+0.023461648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.601073554 +0000 UTC m=+0.126922004 container start 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.605114004 +0000 UTC m=+0.130962474 container attach 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:51:28 compute-0 silly_raman[260220]: 167 167
Dec 04 10:51:28 compute-0 systemd[1]: libpod-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope: Deactivated successfully.
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.609705307 +0000 UTC m=+0.135553757 container died 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c5362a74b16beb4f814dc2218142bf5455763575880cfbbccac13d92e9118b-merged.mount: Deactivated successfully.
Dec 04 10:51:28 compute-0 podman[260203]: 2025-12-04 10:51:28.658551149 +0000 UTC m=+0.184399599 container remove 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 04 10:51:28 compute-0 systemd[1]: libpod-conmon-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope: Deactivated successfully.
Dec 04 10:51:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 04 10:51:28 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec 04 10:51:28 compute-0 ceph-mon[75358]: pgmap v1265: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 5 op/s
Dec 04 10:51:28 compute-0 podman[260243]: 2025-12-04 10:51:28.829219359 +0000 UTC m=+0.040551239 container create 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 04 10:51:28 compute-0 systemd[1]: Started libpod-conmon-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope.
Dec 04 10:51:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:51:28 compute-0 podman[260243]: 2025-12-04 10:51:28.81304543 +0000 UTC m=+0.024377330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:51:28 compute-0 podman[260243]: 2025-12-04 10:51:28.914403535 +0000 UTC m=+0.125735435 container init 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:51:28 compute-0 podman[260243]: 2025-12-04 10:51:28.921306086 +0000 UTC m=+0.132637976 container start 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:51:28 compute-0 podman[260243]: 2025-12-04 10:51:28.925730274 +0000 UTC m=+0.137062154 container attach 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:51:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:29 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Dec 04 10:51:29 compute-0 lvm[260340]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:51:29 compute-0 lvm[260341]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:51:29 compute-0 lvm[260341]: VG ceph_vg1 finished
Dec 04 10:51:29 compute-0 lvm[260340]: VG ceph_vg0 finished
Dec 04 10:51:29 compute-0 lvm[260343]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:51:29 compute-0 lvm[260343]: VG ceph_vg2 finished
Dec 04 10:51:29 compute-0 recursing_shtern[260260]: {}
Dec 04 10:51:29 compute-0 systemd[1]: libpod-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Deactivated successfully.
Dec 04 10:51:29 compute-0 systemd[1]: libpod-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Consumed 1.494s CPU time.
Dec 04 10:51:29 compute-0 podman[260243]: 2025-12-04 10:51:29.843325385 +0000 UTC m=+1.054657265 container died 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253-merged.mount: Deactivated successfully.
Dec 04 10:51:29 compute-0 podman[260243]: 2025-12-04 10:51:29.889569593 +0000 UTC m=+1.100901483 container remove 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:51:29 compute-0 systemd[1]: libpod-conmon-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Deactivated successfully.
Dec 04 10:51:29 compute-0 sudo[260166]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:51:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:51:29 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:30 compute-0 sudo[260357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:51:30 compute-0 sudo[260357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:51:30 compute-0 sudo[260357]: pam_unix(sudo:session): session closed for user root
Dec 04 10:51:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:30 compute-0 sshd-session[260276]: Connection closed by authenticating user root 120.48.35.4 port 33018 [preauth]
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:772141e6-25b5-4706-b9c3-ba13ee143838, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:772141e6-25b5-4706-b9c3-ba13ee143838, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '772141e6-25b5-4706-b9c3-ba13ee143838' of type subvolume
Dec 04 10:51:30 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:30.709+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '772141e6-25b5-4706-b9c3-ba13ee143838' of type subvolume
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838'' moved to trashcan
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:30 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec 04 10:51:30 compute-0 ceph-mon[75358]: pgmap v1266: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Dec 04 10:51:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:30 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:51:31 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 80 KiB/s wr, 4 op/s
Dec 04 10:51:31 compute-0 sshd-session[260382]: Connection closed by authenticating user root 120.48.35.4 port 35900 [preauth]
Dec 04 10:51:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec 04 10:51:32 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:33 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 4 op/s
Dec 04 10:51:33 compute-0 ceph-mon[75358]: pgmap v1267: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 80 KiB/s wr, 4 op/s
Dec 04 10:51:34 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:34 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:35 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:35 compute-0 ceph-mon[75358]: pgmap v1268: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 4 op/s
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:35 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 3 op/s
Dec 04 10:51:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:36 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:36 compute-0 ceph-mon[75358]: pgmap v1269: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 3 op/s
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006045553998927892 of space, bias 4.0, pg target 0.725466479871347 quantized to 16 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:51:37 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 KiB/s wr, 5 op/s
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 04 10:51:38 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:38.288+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48e0e8d9-0ebb-4db4-a173-73e6b17560ed' of type subvolume
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48e0e8d9-0ebb-4db4-a173-73e6b17560ed' of type subvolume
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed'' moved to trashcan
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 04 10:51:38 compute-0 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec 04 10:51:38 compute-0 ceph-mon[75358]: pgmap v1270: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 KiB/s wr, 5 op/s
Dec 04 10:51:39 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Dec 04 10:51:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec 04 10:51:39 compute-0 ceph-mon[75358]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "force": true, "format": "json"}]: dispatch
Dec 04 10:51:39 compute-0 ceph-mon[75358]: pgmap v1271: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Dec 04 10:51:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Dec 04 10:51:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Dec 04 10:51:40 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Dec 04 10:51:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 195 B/s rd, 56 KiB/s wr, 4 op/s
Dec 04 10:51:42 compute-0 ceph-mon[75358]: osdmap e165: 3 total, 3 up, 3 in
Dec 04 10:51:43 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:43.070 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:51:43 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:43.071 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:51:43 compute-0 ceph-mon[75358]: pgmap v1273: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 195 B/s rd, 56 KiB/s wr, 4 op/s
Dec 04 10:51:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 585 B/s rd, 57 KiB/s wr, 5 op/s
Dec 04 10:51:45 compute-0 ceph-mon[75358]: pgmap v1274: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 585 B/s rd, 57 KiB/s wr, 5 op/s
Dec 04 10:51:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec 04 10:51:45 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec 04 10:51:45 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec 04 10:51:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 32 KiB/s wr, 3 op/s
Dec 04 10:51:46 compute-0 ceph-mon[75358]: osdmap e166: 3 total, 3 up, 3 in
Dec 04 10:51:47 compute-0 ceph-mon[75358]: pgmap v1276: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 32 KiB/s wr, 3 op/s
Dec 04 10:51:47 compute-0 podman[260386]: 2025-12-04 10:51:47.973072796 +0000 UTC m=+0.072085885 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:51:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 43 KiB/s wr, 3 op/s
Dec 04 10:51:48 compute-0 nova_compute[244644]: 2025-12-04 10:51:48.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:48 compute-0 nova_compute[244644]: 2025-12-04 10:51:48.377 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:49 compute-0 nova_compute[244644]: 2025-12-04 10:51:49.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:49 compute-0 nova_compute[244644]: 2025-12-04 10:51:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:51:49 compute-0 nova_compute[244644]: 2025-12-04 10:51:49.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:51:49 compute-0 nova_compute[244644]: 2025-12-04 10:51:49.360 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:51:49 compute-0 ceph-mon[75358]: pgmap v1277: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 43 KiB/s wr, 3 op/s
Dec 04 10:51:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 441 B/s rd, 39 KiB/s wr, 3 op/s
Dec 04 10:51:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:51:50 compute-0 nova_compute[244644]: 2025-12-04 10:51:50.373 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:51:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:51:50 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778203738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.020 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.177 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5002MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.246 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.246 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.260 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:51:51 compute-0 ceph-mon[75358]: pgmap v1278: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 441 B/s rd, 39 KiB/s wr, 3 op/s
Dec 04 10:51:51 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3778203738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:51:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:51:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439446977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.820 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.827 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.843 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.845 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:51:51 compute-0 nova_compute[244644]: 2025-12-04 10:51:51.845 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:51:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 10 KiB/s wr, 2 op/s
Dec 04 10:51:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1439446977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:51:53 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:53.073 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:51:53 compute-0 ceph-mon[75358]: pgmap v1279: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 10 KiB/s wr, 2 op/s
Dec 04 10:51:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s wr, 0 op/s
Dec 04 10:51:54 compute-0 ceph-mon[75358]: pgmap v1280: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s wr, 0 op/s
Dec 04 10:51:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:51:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:51:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:51:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:51:55 compute-0 nova_compute[244644]: 2025-12-04 10:51:55.845 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:55 compute-0 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:55 compute-0 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:55 compute-0 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:51:55 compute-0 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:51:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Dec 04 10:51:56 compute-0 podman[260452]: 2025-12-04 10:51:56.99002671 +0000 UTC m=+0.054692397 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:51:57 compute-0 podman[260451]: 2025-12-04 10:51:57.016267445 +0000 UTC m=+0.083394953 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 04 10:51:57 compute-0 ceph-mon[75358]: pgmap v1281: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Dec 04 10:51:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Dec 04 10:51:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:59 compute-0 ceph-mon[75358]: pgmap v1282: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Dec 04 10:51:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:51:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:51:59 compute-0 nova_compute[244644]: 2025-12-04 10:51:59.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:01 compute-0 ceph-mon[75358]: pgmap v1283: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:02 compute-0 sshd-session[260385]: error: kex_exchange_identification: read: Connection timed out
Dec 04 10:52:02 compute-0 sshd-session[260385]: banner exchange: Connection from 120.48.35.4 port 36302: Connection timed out
Dec 04 10:52:03 compute-0 ceph-mon[75358]: pgmap v1284: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:03 compute-0 sshd-session[260495]: Invalid user customer from 107.175.213.239 port 60262
Dec 04 10:52:03 compute-0 sshd-session[260495]: Received disconnect from 107.175.213.239 port 60262:11: Bye Bye [preauth]
Dec 04 10:52:03 compute-0 sshd-session[260495]: Disconnected from invalid user customer 107.175.213.239 port 60262 [preauth]
Dec 04 10:52:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:07 compute-0 ceph-mon[75358]: pgmap v1285: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:08 compute-0 ceph-mon[75358]: pgmap v1286: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:09 compute-0 ceph-mon[75358]: pgmap v1287: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:11 compute-0 sshd-session[260497]: Invalid user customer from 101.47.163.20 port 48756
Dec 04 10:52:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:52:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:52:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:52:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:52:11 compute-0 sshd-session[260497]: Received disconnect from 101.47.163.20 port 48756:11: Bye Bye [preauth]
Dec 04 10:52:11 compute-0 sshd-session[260497]: Disconnected from invalid user customer 101.47.163.20 port 48756 [preauth]
Dec 04 10:52:11 compute-0 ceph-mon[75358]: pgmap v1288: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:52:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:52:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:13 compute-0 ceph-mon[75358]: pgmap v1289: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:15 compute-0 ceph-mon[75358]: pgmap v1290: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:17 compute-0 ceph-mon[75358]: pgmap v1291: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:18 compute-0 podman[260499]: 2025-12-04 10:52:18.962880042 +0000 UTC m=+0.071847639 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 04 10:52:19 compute-0 ceph-mon[75358]: pgmap v1292: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:21 compute-0 ceph-mon[75358]: pgmap v1293: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:23 compute-0 ceph-mon[75358]: pgmap v1294: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:25 compute-0 ceph-mon[75358]: pgmap v1295: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:52:26
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'volumes']
Dec 04 10:52:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:52:27 compute-0 ceph-mon[75358]: pgmap v1296: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:27 compute-0 podman[260520]: 2025-12-04 10:52:27.968148708 +0000 UTC m=+0.075367696 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 04 10:52:27 compute-0 podman[260519]: 2025-12-04 10:52:27.977029996 +0000 UTC m=+0.086839268 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:52:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:52:29 compute-0 ceph-mon[75358]: pgmap v1297: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:30 compute-0 sudo[260562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:52:30 compute-0 sudo[260562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:30 compute-0 sudo[260562]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:30 compute-0 sudo[260587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:52:30 compute-0 sudo[260587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:30 compute-0 sudo[260587]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:52:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:52:30 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:52:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:52:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:31 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:52:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:52:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:52:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:52:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:52:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:52:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:31 compute-0 sudo[260642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:52:31 compute-0 sudo[260642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:31 compute-0 sudo[260642]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:31 compute-0 sudo[260667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:52:31 compute-0 sudo[260667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:31 compute-0 podman[260704]: 2025-12-04 10:52:31.632484645 +0000 UTC m=+0.024596767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:32 compute-0 podman[260704]: 2025-12-04 10:52:32.643525146 +0000 UTC m=+1.035637228 container create 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:52:33 compute-0 systemd[1]: Started libpod-conmon-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope.
Dec 04 10:52:33 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:33 compute-0 ceph-mon[75358]: pgmap v1298: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:52:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:52:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:33 compute-0 podman[260704]: 2025-12-04 10:52:33.103284811 +0000 UTC m=+1.495396923 container init 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:52:33 compute-0 podman[260704]: 2025-12-04 10:52:33.113087692 +0000 UTC m=+1.505199774 container start 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:52:33 compute-0 pensive_ritchie[260721]: 167 167
Dec 04 10:52:33 compute-0 systemd[1]: libpod-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope: Deactivated successfully.
Dec 04 10:52:33 compute-0 podman[260704]: 2025-12-04 10:52:33.414575733 +0000 UTC m=+1.806687845 container attach 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:52:33 compute-0 podman[260704]: 2025-12-04 10:52:33.416847058 +0000 UTC m=+1.808959140 container died 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:52:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1771b206f30bf7be91d3b0213eac521dcdc6fb30bbb5a858a71a75f44875822-merged.mount: Deactivated successfully.
Dec 04 10:52:34 compute-0 ceph-mon[75358]: pgmap v1299: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:34 compute-0 podman[260704]: 2025-12-04 10:52:34.947252231 +0000 UTC m=+3.339364313 container remove 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:52:34 compute-0 systemd[1]: libpod-conmon-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope: Deactivated successfully.
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.129575777 +0000 UTC m=+0.055703661 container create 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:52:35 compute-0 systemd[1]: Started libpod-conmon-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope.
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.100303038 +0000 UTC m=+0.026430942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:35 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.29140542 +0000 UTC m=+0.217533324 container init 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.303611351 +0000 UTC m=+0.229739275 container start 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.336812767 +0000 UTC m=+0.262940671 container attach 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 10:52:35 compute-0 ceph-mon[75358]: pgmap v1300: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:35 compute-0 vibrant_hertz[260762]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:52:35 compute-0 vibrant_hertz[260762]: --> All data devices are unavailable
Dec 04 10:52:35 compute-0 systemd[1]: libpod-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope: Deactivated successfully.
Dec 04 10:52:35 compute-0 podman[260746]: 2025-12-04 10:52:35.856689422 +0000 UTC m=+0.782817306 container died 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806-merged.mount: Deactivated successfully.
Dec 04 10:52:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:36 compute-0 podman[260746]: 2025-12-04 10:52:36.278376559 +0000 UTC m=+1.204504443 container remove 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:52:36 compute-0 systemd[1]: libpod-conmon-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope: Deactivated successfully.
Dec 04 10:52:36 compute-0 sudo[260667]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:36 compute-0 sudo[260795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:52:36 compute-0 sudo[260795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:36 compute-0 sudo[260795]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:36 compute-0 sudo[260820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:52:36 compute-0 sudo[260820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.834576046 +0000 UTC m=+0.109410683 container create fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.756450283 +0000 UTC m=+0.031284960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:36 compute-0 systemd[1]: Started libpod-conmon-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope.
Dec 04 10:52:36 compute-0 ceph-mon[75358]: pgmap v1301: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.939935589 +0000 UTC m=+0.214770216 container init fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.948660694 +0000 UTC m=+0.223495291 container start fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.953501903 +0000 UTC m=+0.228336540 container attach fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 04 10:52:36 compute-0 gifted_bohr[260873]: 167 167
Dec 04 10:52:36 compute-0 systemd[1]: libpod-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope: Deactivated successfully.
Dec 04 10:52:36 compute-0 podman[260857]: 2025-12-04 10:52:36.955701597 +0000 UTC m=+0.230536194 container died fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 04 10:52:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-87566954bff5e4b129eea4a60436009a47d65ef463aa126a06b6887c962ccbf0-merged.mount: Deactivated successfully.
Dec 04 10:52:37 compute-0 podman[260857]: 2025-12-04 10:52:37.01025758 +0000 UTC m=+0.285092167 container remove fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:52:37 compute-0 systemd[1]: libpod-conmon-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope: Deactivated successfully.
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.217373457 +0000 UTC m=+0.056447891 container create 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:52:37 compute-0 systemd[1]: Started libpod-conmon-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope.
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.188849835 +0000 UTC m=+0.027924309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:37 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.325641591 +0000 UTC m=+0.164716035 container init 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.337237576 +0000 UTC m=+0.176312010 container start 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.341628434 +0000 UTC m=+0.180702878 container attach 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150786373821282 of space, bias 4.0, pg target 0.7380943648585538 quantized to 16 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:52:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]: {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     "0": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "devices": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "/dev/loop3"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             ],
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_name": "ceph_lv0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_size": "21470642176",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "name": "ceph_lv0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "tags": {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_name": "ceph",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.crush_device_class": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.encrypted": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.objectstore": "bluestore",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_id": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.vdo": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.with_tpm": "0"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             },
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "vg_name": "ceph_vg0"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         }
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     ],
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     "1": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "devices": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "/dev/loop4"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             ],
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_name": "ceph_lv1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_size": "21470642176",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "name": "ceph_lv1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "tags": {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_name": "ceph",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.crush_device_class": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.encrypted": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.objectstore": "bluestore",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_id": "1",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.vdo": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.with_tpm": "0"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             },
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "vg_name": "ceph_vg1"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         }
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     ],
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     "2": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "devices": [
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "/dev/loop5"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             ],
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_name": "ceph_lv2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_size": "21470642176",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "name": "ceph_lv2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "tags": {
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.cluster_name": "ceph",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.crush_device_class": "",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.encrypted": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.objectstore": "bluestore",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osd_id": "2",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.vdo": "0",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:                 "ceph.with_tpm": "0"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             },
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "type": "block",
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:             "vg_name": "ceph_vg2"
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:         }
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]:     ]
Dec 04 10:52:37 compute-0 vibrant_vaughan[260913]: }
Dec 04 10:52:37 compute-0 systemd[1]: libpod-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope: Deactivated successfully.
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.698183599 +0000 UTC m=+0.537258093 container died 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4-merged.mount: Deactivated successfully.
Dec 04 10:52:37 compute-0 podman[260897]: 2025-12-04 10:52:37.747753989 +0000 UTC m=+0.586828423 container remove 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:52:37 compute-0 systemd[1]: libpod-conmon-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope: Deactivated successfully.
Dec 04 10:52:37 compute-0 sudo[260820]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:37 compute-0 sudo[260934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:52:37 compute-0 sudo[260934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:37 compute-0 sudo[260934]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:37 compute-0 sudo[260959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:52:37 compute-0 sudo[260959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.255008413 +0000 UTC m=+0.045251126 container create ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 04 10:52:38 compute-0 systemd[1]: Started libpod-conmon-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope.
Dec 04 10:52:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.236166128 +0000 UTC m=+0.026408881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.343664414 +0000 UTC m=+0.133907137 container init ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.397705724 +0000 UTC m=+0.187948447 container start ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.401691952 +0000 UTC m=+0.191934785 container attach ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 04 10:52:38 compute-0 nice_tesla[261012]: 167 167
Dec 04 10:52:38 compute-0 systemd[1]: libpod-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope: Deactivated successfully.
Dec 04 10:52:38 compute-0 conmon[261012]: conmon ce2ab492ff3d4056f417 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope/container/memory.events
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.407232188 +0000 UTC m=+0.197474911 container died ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab39e0a9b4fd886286e8703f3b43d179c3f084de2d4b2b4e019e00305ed92c5b-merged.mount: Deactivated successfully.
Dec 04 10:52:38 compute-0 podman[260995]: 2025-12-04 10:52:38.448227958 +0000 UTC m=+0.238470681 container remove ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 04 10:52:38 compute-0 systemd[1]: libpod-conmon-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope: Deactivated successfully.
Dec 04 10:52:38 compute-0 sshd-session[261016]: Accepted publickey for zuul from 192.168.122.10 port 42952 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:52:38 compute-0 systemd-logind[798]: New session 52 of user zuul.
Dec 04 10:52:38 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 04 10:52:38 compute-0 sshd-session[261016]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:52:38 compute-0 sudo[261052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 04 10:52:38 compute-0 sudo[261052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:52:38 compute-0 podman[261039]: 2025-12-04 10:52:38.604210466 +0000 UTC m=+0.029147569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:52:38 compute-0 podman[261039]: 2025-12-04 10:52:38.804166707 +0000 UTC m=+0.229103790 container create 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:52:39 compute-0 ceph-mon[75358]: pgmap v1302: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:39 compute-0 systemd[1]: Started libpod-conmon-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope.
Dec 04 10:52:39 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:52:39 compute-0 podman[261039]: 2025-12-04 10:52:39.772380654 +0000 UTC m=+1.197317737 container init 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:52:39 compute-0 podman[261039]: 2025-12-04 10:52:39.78360501 +0000 UTC m=+1.208542113 container start 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:52:39 compute-0 podman[261039]: 2025-12-04 10:52:39.788187413 +0000 UTC m=+1.213124496 container attach 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:52:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:40 compute-0 lvm[261240]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:52:40 compute-0 lvm[261240]: VG ceph_vg0 finished
Dec 04 10:52:40 compute-0 lvm[261241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:52:40 compute-0 lvm[261241]: VG ceph_vg1 finished
Dec 04 10:52:40 compute-0 lvm[261243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:52:40 compute-0 lvm[261243]: VG ceph_vg2 finished
Dec 04 10:52:40 compute-0 optimistic_dubinsky[261091]: {}
Dec 04 10:52:40 compute-0 systemd[1]: libpod-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Deactivated successfully.
Dec 04 10:52:40 compute-0 systemd[1]: libpod-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Consumed 1.596s CPU time.
Dec 04 10:52:40 compute-0 podman[261039]: 2025-12-04 10:52:40.767036852 +0000 UTC m=+2.191973915 container died 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0-merged.mount: Deactivated successfully.
Dec 04 10:52:40 compute-0 podman[261039]: 2025-12-04 10:52:40.850116716 +0000 UTC m=+2.275053779 container remove 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:52:40 compute-0 systemd[1]: libpod-conmon-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Deactivated successfully.
Dec 04 10:52:40 compute-0 sudo[260959]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:52:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:40 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:52:40 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:41 compute-0 sudo[261290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:52:41 compute-0 sudo[261290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:52:41 compute-0 sudo[261290]: pam_unix(sudo:session): session closed for user root
Dec 04 10:52:41 compute-0 ceph-mon[75358]: pgmap v1303: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:41 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:52:41 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:42 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:42 compute-0 ceph-mon[75358]: from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 04 10:52:43 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269093128' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:52:44 compute-0 ceph-mon[75358]: pgmap v1304: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:44 compute-0 ceph-mon[75358]: from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:44 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1269093128' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:52:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:45 compute-0 ceph-mon[75358]: pgmap v1305: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:46 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:47 compute-0 ceph-mon[75358]: pgmap v1306: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:48 compute-0 ovs-vsctl[261505]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 04 10:52:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:48 compute-0 nova_compute[244644]: 2025-12-04 10:52:48.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:49 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 04 10:52:49 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 04 10:52:49 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 04 10:52:49 compute-0 nova_compute[244644]: 2025-12-04 10:52:49.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:49 compute-0 nova_compute[244644]: 2025-12-04 10:52:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:52:49 compute-0 nova_compute[244644]: 2025-12-04 10:52:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:52:49 compute-0 nova_compute[244644]: 2025-12-04 10:52:49.392 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:52:49 compute-0 ceph-mon[75358]: pgmap v1307: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:49 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: cache status {prefix=cache status} (starting...)
Dec 04 10:52:49 compute-0 podman[261771]: 2025-12-04 10:52:49.663159292 +0000 UTC m=+0.067395730 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:52:49 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: client ls {prefix=client ls} (starting...)
Dec 04 10:52:50 compute-0 lvm[261884]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:52:50 compute-0 lvm[261884]: VG ceph_vg1 finished
Dec 04 10:52:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:50 compute-0 lvm[261895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:52:50 compute-0 lvm[261895]: VG ceph_vg0 finished
Dec 04 10:52:50 compute-0 lvm[261901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:52:50 compute-0 lvm[261901]: VG ceph_vg2 finished
Dec 04 10:52:50 compute-0 nova_compute[244644]: 2025-12-04 10:52:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:50 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: damage ls {prefix=damage ls} (starting...)
Dec 04 10:52:50 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump loads {prefix=dump loads} (starting...)
Dec 04 10:52:50 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 04 10:52:50 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14534 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:50 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 04 10:52:51 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 04 10:52:51 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14538 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.381 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.381 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:52:51 compute-0 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:52:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 04 10:52:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380433314' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 04 10:52:51 compute-0 ceph-mon[75358]: pgmap v1308: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:51 compute-0 ceph-mon[75358]: from='client.14532 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:51 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 04 10:52:51 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 04 10:52:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:51 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14540 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:51 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:52:51.945+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 10:52:51 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 10:52:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428355298' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: ops {prefix=ops} (starting...)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274008835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.091 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:52:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.273 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4873MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.382 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.383 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:52:52 compute-0 nova_compute[244644]: 2025-12-04 10:52:52.402 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:52:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207267331' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.14534 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.14538 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1380433314' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3428355298' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2274008835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1207267331' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545602561' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 04 10:52:52 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session ls {prefix=session ls} (starting...)
Dec 04 10:52:52 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: status {prefix=status} (starting...)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:52:52 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597533027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:52:53 compute-0 nova_compute[244644]: 2025-12-04 10:52:53.016 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:52:53 compute-0 nova_compute[244644]: 2025-12-04 10:52:53.022 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:52:53 compute-0 nova_compute[244644]: 2025-12-04 10:52:53.050 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:52:53 compute-0 nova_compute[244644]: 2025-12-04 10:52:53.052 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:52:53 compute-0 nova_compute[244644]: 2025-12-04 10:52:53.052 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:52:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 04 10:52:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702401482' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 04 10:52:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138267846' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: pgmap v1309: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/545602561' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/597533027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/702401482' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/138267846' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 04 10:52:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015691080' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 10:52:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14561 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 04 10:52:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491425011' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 10:52:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1015691080' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 10:52:54 compute-0 ceph-mon[75358]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3491425011' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 10:52:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 04 10:52:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828374942' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 04 10:52:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 10:52:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639261421' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:52:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:52:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.922 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:52:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.922 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:52:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 04 10:52:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389546349' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 04 10:52:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545059339' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: from='client.14561 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: pgmap v1310: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2828374942' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3639261421' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1389546349' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2545059339' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 10:52:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:55 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:52:55.754+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 04 10:52:55 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 04 10:52:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 04 10:52:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154090365' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 10:52:56 compute-0 nova_compute[244644]: 2025-12-04 10:52:56.053 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:56 compute-0 nova_compute[244644]: 2025-12-04 10:52:56.053 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:56 compute-0 nova_compute[244644]: 2025-12-04 10:52:56.054 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:56 compute-0 nova_compute[244644]: 2025-12-04 10:52:56.054 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:52:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:56 compute-0 nova_compute[244644]: 2025-12-04 10:52:56.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 04 10:52:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784620337' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 04 10:52:56 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1154090365' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 10:52:56 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/784620337' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 04 10:52:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:11.102000+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.d scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1605632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:42.855205+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919620 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:43.855375+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:44.855514+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:45.857546+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:15.102536+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.11 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:15.116650+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.11 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 179)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:15.102536+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.11 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:15.116650+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.11 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 1572864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:46.857768+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:16.069049+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:16.104477+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 181)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:16.069049+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:16.104477+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1564672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:47.858009+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924444 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:48.858170+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:49.858306+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:50.858569+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:20.013818+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.e scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:20.052668+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.e scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 183)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:20.013818+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.e scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:20.052668+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.e scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1548288 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:51.859170+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.960282326s of 10.977203369s, submitted: 8
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 1523712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:52.859305+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:22.068755+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:22.100526+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 185)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:22.068755+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:22.100526+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931681 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 1523712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:53.859528+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:23.048360+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.13 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:23.080164+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.13 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 187)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:23.048360+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.13 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:23.080164+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.13 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 1515520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:54.859708+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:55.859815+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:56.859991+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1507328 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:57.860184+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934094 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:58.860328+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:28.061164+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:28.103504+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 189)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:28.061164+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:28.103504+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:19:59.860561+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:29.085003+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.6 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:29.116922+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.6 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 191)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:29.085003+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.6 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:29.116922+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.6 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:00.860779+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1490944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:01.860956+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1490944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.030856133s of 10.057851791s, submitted: 8
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:02.861125+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:32.126506+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:32.161141+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1482752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 193)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:32.126506+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:32.161141+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938916 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:03.861332+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1482752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:04.861459+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:05.861583+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:35.065977+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:35.094199+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 195)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:35.065977+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:35.094199+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:06.861793+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:36.056632+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.f scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:36.095442+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.f scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 197)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:36.056632+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.f scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:36.095442+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.f scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:07.862073+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1466368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943738 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:08.862157+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1466368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:09.862284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1458176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:10.862399+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:40.054540+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.17 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  will send 2025-12-04T10:20:40.079273+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.17 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1458176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client handle_log_ack log(last 199)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:40.054540+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.17 scrub starts
Dec 04 10:52:56 compute-0 ceph-osd[88205]: log_client  logged 2025-12-04T10:20:40.079273+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.17 scrub ok
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:11.862609+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:12.862865+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:13.863035+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:14.863176+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1441792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:15.863300+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1441792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:16.863488+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1417216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:17.863646+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1409024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:18.863798+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1409024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:19.863945+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1400832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:20.864092+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1400832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:21.864278+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:22.864628+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:23.864779+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:24.864935+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1384448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:25.865069+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1384448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:26.865235+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1376256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:27.865402+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1376256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:28.865605+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1368064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:29.865744+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1368064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:30.865863+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1359872 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:31.866027+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1359872 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:32.866182+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 1351680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:33.866326+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 1351680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:34.866486+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1343488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:35.866630+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1343488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:36.866769+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1310720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:37.866913+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1310720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:38.867044+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:39.867191+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:40.867316+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:41.867496+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1294336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:42.867659+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1294336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:43.867842+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:44.867994+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:45.868145+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:46.868284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:47.868472+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:48.868602+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:49.868768+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1269760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:50.868878+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1269760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:51.869130+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1253376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:52.869283+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1253376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:53.869510+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1245184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:54.869659+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1245184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:55.869803+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1228800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:56.869998+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1220608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:57.870164+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1212416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:58.870357+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1212416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:59.870750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:00.870935+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:01.871153+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:02.871294+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:03.871465+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:04.871671+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:05.871843+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1187840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:06.872021+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1187840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:07.872214+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1179648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:08.872389+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1179648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:09.872540+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:10.872694+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:11.872890+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:12.873036+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:13.873234+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:14.873430+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:15.873578+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1155072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:16.873834+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1155072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:17.874004+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:18.874147+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:19.874291+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:20.874425+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:21.874624+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:22.874780+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:23.874963+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:24.875186+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:25.875315+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1130496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:26.875435+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1130496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:27.875584+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:28.875718+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:29.875897+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:30.876092+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1114112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:31.876308+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1114112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:32.876444+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:33.876580+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:34.876759+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:35.876893+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:36.877030+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:37.877219+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:38.877419+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1089536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:39.877564+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1089536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:40.877697+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:41.877883+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:42.890827+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:43.891237+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:44.891395+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:45.891585+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:46.891763+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 1064960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:47.891914+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 1064960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:48.892044+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 1056768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:49.892192+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 1056768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:50.892338+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:51.892512+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:52.892659+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:53.892849+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:54.892997+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:55.893174+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:56.893338+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 1032192 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:57.893518+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 1015808 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:58.893665+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:59.893865+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:00.894039+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:01.894220+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 999424 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:02.894340+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 999424 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:03.894505+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:04.894644+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:05.894786+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:06.894937+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 983040 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:07.895125+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 983040 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:08.895263+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 974848 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:09.895390+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 974848 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:10.895562+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 966656 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:11.895742+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 958464 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:12.895932+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 958464 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:13.896074+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:14.896264+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:15.896452+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:16.896583+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 942080 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:17.896717+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 942080 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:18.896875+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:19.897070+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:20.897273+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:21.897446+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:22.897604+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:23.897796+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:24.897937+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 917504 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:25.898094+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 917504 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:26.898300+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 909312 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:27.898445+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 909312 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:28.898628+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:29.898806+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:30.898966+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:31.899197+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 892928 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:32.899327+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 892928 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:33.899463+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 884736 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:34.899597+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 884736 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:35.899750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 876544 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:36.899910+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:37.900158+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:38.900302+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:39.900536+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:40.900701+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:41.900926+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:42.901070+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 851968 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:43.901180+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 851968 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:44.901357+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 843776 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:45.901499+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 843776 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:46.901714+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 835584 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:47.901878+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 835584 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:48.902018+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:49.902151+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:50.902275+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:51.902458+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:52.902617+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 819200 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:53.902759+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 819200 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:54.902906+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:55.903032+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:56.903239+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:57.903396+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:58.903581+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:59.903736+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:00.903892+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:01.904078+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:02.904258+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:03.904374+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:04.904502+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:05.904679+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:06.904882+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:07.905243+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 786432 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:08.905409+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 786432 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:09.905540+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:10.905707+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:11.905895+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:12.906046+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:13.906177+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:14.906305+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:15.906485+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:16.906661+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:17.906808+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:18.906971+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:19.907123+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:20.907284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:21.907440+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:22.907607+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:23.907753+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:24.907916+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:25.908058+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 712704 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:26.908206+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:27.908347+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:28.908493+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:29.908634+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:30.908790+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:31.908957+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:32.909062+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:33.909200+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:34.909387+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:35.909517+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:36.909675+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:37.909841+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:38.910013+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:39.910166+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:40.910409+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:41.910656+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:42.910818+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:43.910983+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:44.911201+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:45.911367+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 655360 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:46.911531+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 655360 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:47.911693+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:48.911815+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:49.912011+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:50.912168+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:51.912374+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:52.912518+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:53.912702+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:54.912854+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:55.913005+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:56.913194+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:57.913358+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:58.913507+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:59.913643+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:00.913827+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:01.914005+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:02.914161+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:03.914344+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:04.914486+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:05.914682+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 581632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:06.914825+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 581632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:07.914973+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:08.915140+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:09.915312+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:10.915492+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 557056 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:11.915669+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 557056 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:12.915800+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:13.915931+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:14.916074+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:15.916190+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:16.916356+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:17.916509+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:18.916666+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:19.916835+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:20.917020+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:21.917196+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:22.917352+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:23.917510+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:24.917642+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:25.917788+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:26.917931+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:27.918092+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:28.918249+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:29.918558+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:30.918766+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:31.918954+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:32.919114+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:33.919464+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:34.919637+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:35.919856+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:36.920066+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:37.920265+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:38.920418+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:39.920564+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:40.920711+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:41.920918+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:42.921082+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:43.921313+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:44.921474+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:45.921691+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:46.922166+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:47.922302+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:48.922467+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:49.922605+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:50.922746+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:51.922995+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:52.923184+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:53.923338+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:54.923467+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 401408 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:55.923611+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 401408 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:56.923778+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 393216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:57.923919+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 393216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:58.924081+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 385024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:59.924235+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 376832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:00.924416+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 376832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:01.924646+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:02.924787+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:03.925009+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:04.925139+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 360448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:05.925284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 360448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:06.925445+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 352256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s
                                           Interval WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:07.925596+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:08.925741+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:09.925879+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:10.926018+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:11.926148+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:12.926264+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:13.926390+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:14.926540+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:15.926703+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:16.926828+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 253952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:17.926995+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 253952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:18.927202+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:19.927343+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:20.927456+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:21.927629+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:22.927766+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:23.927965+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:24.928068+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:25.928202+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:26.928351+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:27.928502+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:28.928625+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:29.928831+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:30.928979+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:31.929209+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:32.929377+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:33.929530+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:34.929611+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:35.929772+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:36.929901+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:37.930018+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:38.930182+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:39.930355+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:40.930529+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:41.930712+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:42.930854+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:43.930993+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:44.931169+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:45.931318+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:46.931458+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 163840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:47.931609+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 163840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:48.931730+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:49.931948+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:50.932151+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:51.932329+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:52.932470+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:53.932605+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:54.932773+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:55.932976+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:56.933146+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 131072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:57.933434+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 122880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:58.933591+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:59.933879+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:00.934146+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:01.934311+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:02.934491+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:03.934650+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:04.934838+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 98304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:05.934995+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 98304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:06.935164+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:07.935291+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:08.935464+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:09.935610+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 81920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:10.935777+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 73728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:11.935982+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 73728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:12.936133+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 65536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:13.936290+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 65536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:14.936509+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:15.936676+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:16.936877+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:17.937038+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:18.937218+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:19.937416+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:20.937625+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 40960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:21.938022+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 40960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:22.938159+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 380.932373047s of 381.029602051s, submitted: 8
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 32768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:23.938280+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [1])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:24.938442+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:25.938588+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:26.938732+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:27.938872+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:28.939030+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:29.939220+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 1875968 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:30.939361+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 1875968 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:31.939528+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 1867776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:32.939656+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 1867776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:33.939840+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 1859584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:34.939994+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 1859584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:35.940147+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 1851392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:36.940281+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1843200 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:37.940404+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1843200 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:38.940554+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 1835008 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:39.940724+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 1835008 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:40.940851+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1826816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:41.941058+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1818624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:42.941177+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1818624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:43.941324+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:44.941647+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:45.941894+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:46.942086+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1802240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:47.942336+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1802240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:48.942523+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1794048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:49.942695+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1794048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:50.942900+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:51.943136+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:52.943266+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:53.943423+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1777664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:54.943584+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1777664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:55.943745+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1769472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:56.943937+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1769472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:57.944125+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:58.944259+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:59.944405+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:00.944562+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:01.944740+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:02.944886+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:03.945035+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1744896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:04.945202+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1744896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:05.945348+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:06.945504+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:07.945657+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:08.945791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:09.945930+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:10.946080+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:11.946275+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:12.946424+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:13.946637+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:14.946860+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:15.947028+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:16.947311+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:17.947491+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:18.947642+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:19.947791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:20.947958+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:21.948206+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:22.948351+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:23.948490+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:24.948638+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:25.948817+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:26.948956+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:27.949153+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:28.949290+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:29.949435+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:30.949576+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:31.949759+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:32.949892+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:33.950051+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:34.950274+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:35.950472+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:36.950595+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:37.950744+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:38.950847+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:39.950987+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:40.951185+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:41.951365+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:42.951496+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:43.951670+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:44.951821+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:45.952023+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:46.952169+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:47.952310+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:48.952468+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:49.952585+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:50.952700+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:51.952850+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:52.952980+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:53.953138+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:54.953311+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:55.953445+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:56.953605+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:57.953730+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:58.953866+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:59.954019+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:00.954162+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:01.954333+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:02.954486+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:03.954627+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:04.954783+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:05.954940+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:06.955090+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:07.955232+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:08.955382+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:09.955532+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:10.955664+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:11.955851+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:12.956024+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:13.956217+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:14.956373+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:15.956532+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:16.956695+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:17.956824+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:18.956955+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:19.957137+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:20.957279+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:21.957439+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:22.957691+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:23.957919+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:24.958052+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:25.958228+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:26.958498+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:27.958662+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:28.958829+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:29.958992+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:30.959178+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:31.959336+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:32.959540+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:33.959707+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:34.959843+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:35.959996+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:36.960155+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:37.960314+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:38.960464+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:39.960619+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:40.960837+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:41.961064+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:42.961266+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:43.961414+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:44.961548+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:45.961739+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:46.961915+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:47.962092+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:48.962266+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:49.962417+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.962546+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.962734+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.962900+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.963049+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.963185+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.963342+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.963496+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.963650+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.963796+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.963935+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.964090+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.964306+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.964444+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.964579+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.964716+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.964916+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.965114+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.965311+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.965470+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.965624+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.965803+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.965991+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.966158+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.966301+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.966438+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.966562+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.966727+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.966889+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.967085+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.967279+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.967430+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.967621+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.967774+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.967949+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.968127+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.968283+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.968428+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.968644+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.968897+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.969086+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.969341+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.969495+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.969640+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.969791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.970037+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.970185+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.970328+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.970508+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.970711+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.970964+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.971155+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1572864 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.971340+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1556480 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.971478+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.971639+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.973564+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.973728+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.973879+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.974007+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.974184+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.974313+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.974483+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.974655+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.974789+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.974923+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.975160+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.975279+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.975543+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.975697+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.975849+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.975996+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.976162+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.976446+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.976595+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.976688+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.976873+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.977009+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.977124+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.977216+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.977364+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.977489+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.977632+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.977786+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.977929+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.978117+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.978217+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a3a34000
Dec 04 10:52:56 compute-0 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 10:52:56 compute-0 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: get_auth_request con 0x55c0a5b1a800 auth_method 0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.978373+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.978516+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.978653+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.978788+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.978927+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.979057+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.979290+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.979443+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.979649+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.979809+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.980018+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.980164+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.980286+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.980445+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.980595+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.980750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.980947+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.981136+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.981330+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.981450+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.981587+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.981750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.981893+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.982129+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.982318+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.982541+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.982728+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.982863+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.983011+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.983182+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.983325+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.983491+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.983628+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.983750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.983891+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.984029+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.984160+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.984297+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.985281+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.985419+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.985545+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.985676+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.985822+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.985986+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.986144+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.986284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.986486+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.986653+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.986783+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.986968+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.987125+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.987323+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.987464+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.987666+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.987820+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.987972+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.988159+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.988309+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.988458+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.988646+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.988823+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.989016+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.989152+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.989301+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.989459+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.989632+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.989809+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.989972+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450f800
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.998474121s of 300.141143799s, submitted: 90
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 1024000 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.990166+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.990361+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.990581+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.990756+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.990916+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.005024+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.005292+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.005475+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.005678+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.005888+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.006065+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.006416+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.006680+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.006870+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.007080+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.007267+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.007449+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.007592+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.007808+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.007972+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.008167+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.008387+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.008543+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.008700+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.008866+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.009038+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.009175+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.009335+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.009584+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.009720+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.009995+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.010159+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.010360+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.010497+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.010643+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.010795+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.010955+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.011183+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.012027+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.012319+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.012692+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.012875+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 917504 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.013009+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.013184+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.013328+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.013512+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.013676+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.013818+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.014025+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.014259+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.014456+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.014661+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.014816+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.014981+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.015164+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.015337+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.015484+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.015641+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.015849+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.015995+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.016157+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.016357+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.016560+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.016711+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.016877+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.017068+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.017270+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.017414+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.017614+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.017747+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.017926+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.018170+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.018414+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.018628+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.018773+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.018922+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.019283+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.019472+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.019637+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.019785+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.019899+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.020038+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.020150+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.020285+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.020429+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.020633+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.020764+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.020980+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.021197+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.021346+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.021495+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.023421+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.023619+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.023799+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.023927+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.024074+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.024336+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.024551+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.024791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.024988+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.025194+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.025386+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.025596+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.025791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.026054+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.026196+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.026351+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.026493+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.026690+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.026961+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.027151+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.027305+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.027531+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.027702+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.027871+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.028020+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.028178+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.028318+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.028488+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.028652+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.028811+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.028944+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.029128+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.029284+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.029443+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.029640+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.029795+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.029959+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.030463+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.030641+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.030823+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.030992+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.031269+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.031417+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.031554+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.031693+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.031851+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.031983+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.032161+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.032305+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.032477+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.032668+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.032845+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.033004+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.033215+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.033363+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.033727+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.033959+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.034192+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.034386+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.034577+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.034767+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.034930+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.035074+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.035235+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.035414+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.035575+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.035730+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.035902+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.036028+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.036167+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.036343+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.036507+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.036634+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.036823+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.036969+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.037153+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.037276+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.037457+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.037612+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.037783+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.037918+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.038138+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.038308+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.038465+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.038620+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.038760+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.038933+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.039234+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.039417+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.039573+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.039738+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.039920+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.040147+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.040298+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.040466+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.040927+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.041076+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.041264+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.041454+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.041644+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.041803+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.041936+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.042207+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.042395+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.042550+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.042721+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.042889+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.043072+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.043252+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.043415+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.043579+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.043754+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.043962+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727248690' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.044292+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.044495+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.044786+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.044988+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.045209+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.045365+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.045522+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.045828+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.046225+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.046703+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.046937+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.047185+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.047328+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.047650+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.048194+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.048564+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.048783+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.049086+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.049356+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.049539+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.049757+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.050199+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.050487+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.423750+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.423941+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.424179+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.424319+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.424465+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.424624+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.424796+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.424915+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.425073+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.425275+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.425471+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.425654+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.425793+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.425938+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.426162+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.426318+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.426483+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.426665+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.426839+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.427023+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.427225+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.427405+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.427570+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.427752+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.427882+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.428074+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.428320+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.428464+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.428698+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.428854+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.428996+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.429159+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.429272+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.429399+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.429553+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.429687+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.429819+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.430585+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.430713+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.430854+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.431011+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.431234+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.431412+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.431741+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.431977+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.432162+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.432365+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.432526+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.432654+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.432810+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.432947+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.433174+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.433319+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.433459+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.433601+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.433761+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.433904+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.434044+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.434186+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.434340+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.434560+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.434800+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.435020+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.435184+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.435357+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.435516+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.435666+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.435836+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.436042+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.436200+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.436356+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.436535+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.694915771s of 299.933593750s, submitted: 24
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.436677+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 573440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.436828+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 221184 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.437304+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.437451+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.437583+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.437719+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.437849+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.437983+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.438160+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.438353+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.438486+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.438633+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.438766+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.438936+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.439089+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.439276+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.439397+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.439547+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.439736+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.439918+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.440125+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.440286+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.440520+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.440666+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.440809+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 204800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.440950+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.441124+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.441287+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.441439+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.441815+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.441964+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.442180+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.442330+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.442517+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.442702+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.442854+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.443032+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.443263+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.443435+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.443620+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.443775+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.443921+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.444142+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.444281+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.444400+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.444538+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.444671+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.444828+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.444979+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.445162+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.445366+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.445528+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.445704+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.445860+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.446033+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.446298+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.446515+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.446681+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.446877+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.447089+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.447285+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.447465+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.447635+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.447803+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.447950+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.448091+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.448249+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.448467+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.448652+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.448844+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.449032+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.449289+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.449478+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.449747+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.450036+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.450267+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.450440+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.450634+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.450856+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.451042+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.451259+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.451477+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.451722+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.451932+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.452166+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.452323+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.452791+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.452970+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.453185+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.453379+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.453566+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.453757+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.453947+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.454161+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.454368+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.454532+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.454754+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.454961+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.455387+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.456011+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.456376+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.456643+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.457168+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.457644+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.457960+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.458235+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.458387+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.458564+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.458756+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.458958+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.459134+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.459286+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.459492+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.459828+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.460147+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.460753+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.460990+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3fee800
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.999755859s of 117.139999390s, submitted: 90
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 1040384 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.461275+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcebe000/0x0/0x4ffc00000, data 0xab840/0x16c000, compress 0x0/0x0/0x0, omap 0x11ab8, meta 0x2bbe548), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 991232 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.461475+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 120 ms_handle_reset con 0x55c0a3fee800 session 0x55c0a401ec40
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.461720+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 9330688 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983916 data_alloc: 218103808 data_used: 3520
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a2e4e400
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x51efe8/0x5e2000, compress 0x0/0x0/0x0, omap 0x11dfd, meta 0x2bbe203), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.461906+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 9175040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.462071+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 9134080 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 121 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5490380
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.462381+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.462582+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.462839+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:56 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:56 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988171 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.463200+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:56 compute-0 ceph-osd[88205]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:56 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.463442+0000)
Dec 04 10:52:56 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.463619+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.463799+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.464039+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.464211+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.464361+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.464511+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.464657+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.464818+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.464983+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.465156+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.465312+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.465490+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.465689+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.107776642s of 22.230192184s, submitted: 58
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a2e4e400
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993025 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.465841+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 9158656 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 10
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.465990+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 9142272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.466168+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca36000/0x0/0x4ffc00000, data 0x52d5d7/0x5f6000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.466330+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x52e85e/0x5f7000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.466529+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 9027584 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995567 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.466693+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.466865+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.467072+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.467291+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.467527+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 8765440 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814142227s of 10.116048813s, submitted: 35
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999813 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.467691+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 8650752 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 11
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.467863+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.468018+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.468176+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 8273920 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x558809/0x622000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.468436+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 8151040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001261 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.471610+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 6946816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.471826+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 6905856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.471997+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 6782976 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.472146+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fc9e7000/0x0/0x4ffc00000, data 0x57b223/0x645000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.472453+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.159253120s of 10.109436035s, submitted: 78
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006011 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.472714+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 6619136 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9e5000/0x0/0x4ffc00000, data 0x57def9/0x647000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.472870+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.473084+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.473332+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.473532+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9d2000/0x0/0x4ffc00000, data 0x590256/0x65a000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008761 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.473707+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.473888+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9c8000/0x0/0x4ffc00000, data 0x59a3b3/0x664000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.474031+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.474176+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.474385+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009539 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.474538+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.474690+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9bc000/0x0/0x4ffc00000, data 0x5a6634/0x670000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.474863+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.569020271s of 12.741366386s, submitted: 41
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.475088+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.475398+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 5365760 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9b3000/0x0/0x4ffc00000, data 0x5af14c/0x679000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006899 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.475684+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 5300224 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.475870+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 5292032 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.476196+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 5251072 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.476365+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.476551+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011691 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.476732+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc996000/0x0/0x4ffc00000, data 0x5cac75/0x696000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.476871+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 5021696 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.477063+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882642746s of 10.000583649s, submitted: 38
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2809856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.477174+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1728512 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.477366+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1630208 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014373 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.477524+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1556480 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.477673+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1417216 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb7ca000/0x0/0x4ffc00000, data 0x5f65bb/0x6c2000, compress 0x0/0x0/0x0, omap 0x11f29, meta 0x3d5e0d7), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.477848+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.478051+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.478293+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 1056768 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017817 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.478463+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.478630+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.478761+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.733018875s of 10.001555443s, submitted: 91
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 950272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb7a8000/0x0/0x4ffc00000, data 0x61635f/0x6e2000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.478930+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.479150+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019529 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.479362+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1957888 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb78d000/0x0/0x4ffc00000, data 0x633c4e/0x6ff000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.479565+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.479786+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.479968+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 1826816 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.480283+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb778000/0x0/0x4ffc00000, data 0x6480c0/0x714000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1802240 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023333 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.480432+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.480587+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.480827+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848536491s of 10.001356125s, submitted: 29
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.480966+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb763000/0x0/0x4ffc00000, data 0x65cfdb/0x729000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.481174+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 1867776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022057 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.481370+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.481569+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.481748+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1744896 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.481892+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a5b1a400
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 401408 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.482086+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb72d000/0x0/0x4ffc00000, data 0x68e752/0x75f000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 999424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.482328+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041309 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 12
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 983040 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.482487+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 1196032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.482661+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589168549s of 10.002529144s, submitted: 57
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450e000
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb703000/0x0/0x4ffc00000, data 0x6ba903/0x789000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 1105920 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.482846+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 1073152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.483061+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 1015808 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.483326+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038997 data_alloc: 218103808 data_used: 4260
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6da000/0x0/0x4ffc00000, data 0x6e2609/0x7b2000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.483575+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6bc000/0x0/0x4ffc00000, data 0x6ffebe/0x7d0000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.483799+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 1179648 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.483964+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2170880 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.484331+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 884736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb669000/0x0/0x4ffc00000, data 0x74b4c0/0x81f000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.484511+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059855 data_alloc: 218103808 data_used: 4260
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 679936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.484751+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 663552 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.484995+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600092888s of 10.000102997s, submitted: 172
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 1277952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.485161+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90284032 unmapped: 892928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.485327+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 1662976 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.485546+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063631 data_alloc: 218103808 data_used: 4105
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1630208 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb622000/0x0/0x4ffc00000, data 0x7907b7/0x866000, compress 0x0/0x0/0x0, omap 0x12520, meta 0x3d5dae0), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.485706+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90038272 unmapped: 1138688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.485834+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb5e9000/0x0/0x4ffc00000, data 0x7ca62d/0x8a1000, compress 0x0/0x0/0x0, omap 0x12680, meta 0x3d5d980), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 2072576 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.485976+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 1974272 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.486195+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.486361+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075351 data_alloc: 218103808 data_used: 4755
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.486575+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90054656 unmapped: 2170880 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.486738+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.681773186s of 10.032649994s, submitted: 160
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91299840 unmapped: 925696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.486889+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x7fe7a5/0x8d7000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.487032+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x802d67/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 860160 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.487187+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076485 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.487370+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.487533+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x803215/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.487716+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.487961+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.488225+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075757 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.488387+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 491520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.488536+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.903874397s of 10.266777992s, submitted: 19
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 483328 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.488673+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 442368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.488822+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 294912 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb580000/0x0/0x4ffc00000, data 0x833caa/0x90c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.489026+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078121 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.489222+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.489357+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91504640 unmapped: 1769472 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.489539+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.489779+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.489960+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079017 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.490191+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.490350+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.490591+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.490736+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.490945+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080545 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.491168+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.491333+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.491517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.491717+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.004943848s of 16.979648590s, submitted: 22
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 2056192 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.491852+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb539000/0x0/0x4ffc00000, data 0x87b015/0x953000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081409 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.492028+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.492183+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.492317+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.492483+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.492635+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082297 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.492763+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb510000/0x0/0x4ffc00000, data 0x8a3c6f/0x97c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.492922+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 933888 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4f3000/0x0/0x4ffc00000, data 0x8c1190/0x999000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.493070+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.493266+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.493481+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4d8000/0x0/0x4ffc00000, data 0x8db7f0/0x9b4000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084117 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924418449s of 11.061837196s, submitted: 25
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.493634+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.493777+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.493930+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.494073+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a5b1a400 session 0x55c0a5b048c0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.494307+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a450e000 session 0x55c0a5f96700
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085581 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee753/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 13
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.494455+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee8b9/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.494685+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.494927+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.495166+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb492000/0x0/0x4ffc00000, data 0x9215f8/0x9fa000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb493000/0x0/0x4ffc00000, data 0x92155d/0x9f9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.495505+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090467 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.560784340s of 10.244839668s, submitted: 209
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.495689+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.495886+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 2392064 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.496067+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.496213+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.496473+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089339 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.496711+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.496920+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.497088+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.497285+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.497495+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087675 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.497742+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.497912+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.498055+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.498256+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.498471+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.419944763s of 14.577485085s, submitted: 11
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087819 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.498615+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.498771+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.498968+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.499291+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.499433+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089351 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.499588+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.499815+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.499993+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.500199+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.500385+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089207 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.268507957s of 10.285860062s, submitted: 5
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.500599+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.500773+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.500969+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.501235+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.501399+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088649 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.501543+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.501694+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.501882+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.502054+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.502267+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090309 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.970705986s of 10.010634422s, submitted: 8
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.502417+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.502590+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.502904+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.503092+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.503285+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092127 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.503471+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.503675+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.503890+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.504093+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.504274+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094885 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.504433+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.504639+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.480938911s of 11.537956238s, submitted: 43
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.504846+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.505084+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.505310+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095029 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.505517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.505679+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.505845+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.506029+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.506205+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097421 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.506331+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.506478+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.449153900s of 10.473722458s, submitted: 15
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.506639+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.506841+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.506976+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096815 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.507188+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.507319+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.507540+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.507724+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.508005+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098363 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.508279+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.508484+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.951936722s of 10.007729530s, submitted: 8
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.508701+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.508907+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.509087+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.509304+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.509549+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.509735+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.509878+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.510073+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.510238+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.510392+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.510558+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.995441437s of 11.008138657s, submitted: 5
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.510821+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.510979+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.511158+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.511354+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.511553+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.511710+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.511847+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.512013+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.512225+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.512379+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.512612+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.512808+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097789 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.513000+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.616669655s of 12.638894081s, submitted: 13
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.513199+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.513415+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.513628+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.513858+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 2351104 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097805 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.514040+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.514202+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.514330+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.514524+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.514667+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.514845+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.514997+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.879154205s of 10.907876015s, submitted: 15
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.515197+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 2236416 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.515325+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.515473+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099337 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.516217+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.516368+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dde1/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.516517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.516696+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.516837+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099177 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.517009+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.517179+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.517315+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.517456+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.439765930s of 12.481030464s, submitted: 20
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.517554+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098603 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.517687+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.517802+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.517939+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.518158+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.518336+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103199 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.518527+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x93f98a/0xa1c000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.518711+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.518913+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 1122304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.519032+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.519297+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104299 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.519538+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46f000/0x0/0x4ffc00000, data 0x93fa25/0xa1d000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.015718460s of 12.077057838s, submitted: 32
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.519863+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.519993+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.520185+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.520318+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109053 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.520454+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.520596+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.520744+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.520893+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.521048+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109881 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.521197+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x9415da/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.521334+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393504143s of 10.408122063s, submitted: 19
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.521508+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.521849+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.522164+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112547 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.522555+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb468000/0x0/0x4ffc00000, data 0x9416a3/0xa23000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.522825+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.522980+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.523181+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.523436+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111653 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.523630+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.523841+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.524045+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508604050s of 11.536386490s, submitted: 13
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.524285+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.524469+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112619 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.524658+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1064960 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46b000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.524830+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 1048576 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.524975+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94330880 unmapped: 1040384 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.525156+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.525315+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121177 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.525508+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.525641+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x964870/0xa47000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 958464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.525883+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 745472 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857433319s of 10.014651299s, submitted: 97
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.526067+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.526211+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 1941504 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135499 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.526417+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 1884160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x9e0112/0xac6000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.526639+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 1875968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x9eacc4/0xad1000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.526880+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 1867776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.527044+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3ac000/0x0/0x4ffc00000, data 0x9f885a/0xade000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.527294+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134715 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.527542+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 827392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.527700+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.527886+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.528116+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.528301+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139427 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.528446+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.366982460s of 12.445398331s, submitted: 44
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97017856 unmapped: 1499136 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.528564+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.528728+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.528869+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1425408 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.529056+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149237 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.529254+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb2d4000/0x0/0x4ffc00000, data 0xacf83f/0xbb8000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.529397+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.529544+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 3153920 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.529760+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb29d000/0x0/0x4ffc00000, data 0xb04a92/0xbed000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.529962+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157693 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.530142+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 2998272 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291193008s of 10.482179642s, submitted: 116
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.530278+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 2637824 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.530430+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb217000/0x0/0x4ffc00000, data 0xb8911b/0xc73000, compress 0x0/0x0/0x0, omap 0x12dfc, meta 0x3d5d204), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 2514944 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.530573+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 3121152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.530713+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1851392 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174773 data_alloc: 218103808 data_used: 5091
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.530847+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1802240 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.530978+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb19c000/0x0/0x4ffc00000, data 0xc00bf3/0xcee000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 2097152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.531137+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99672064 unmapped: 1990656 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.531266+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb16d000/0x0/0x4ffc00000, data 0xc32018/0xd1d000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1843200 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.531424+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2408448 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb147000/0x0/0x4ffc00000, data 0xc594a7/0xd45000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.531565+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177381 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 2269184 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723609924s of 10.026507378s, submitted: 124
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.531724+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 1171456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb132000/0x0/0x4ffc00000, data 0xc6f06d/0xd59000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.531857+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.532026+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.533382+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.533562+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188495 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.533704+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.533921+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.534135+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.534281+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.534419+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192129 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb067000/0x0/0x4ffc00000, data 0xd37a62/0xe24000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.534593+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 1638400 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.535638+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.634295464s of 11.785771370s, submitted: 94
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.536016+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.536349+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.536649+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190545 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.536853+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb04c000/0x0/0x4ffc00000, data 0xd54161/0xe40000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.537470+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.537668+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.538061+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.538419+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190921 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.538855+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.539036+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.272990227s of 10.301798820s, submitted: 29
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.539275+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.539488+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.539683+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192613 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.539847+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.540141+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.540370+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.540615+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.540748+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194161 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.541070+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c70/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.541279+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.541451+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.541689+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.541880+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195709 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.117662430s of 13.127370834s, submitted: 5
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.542054+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.542273+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55e6c/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.542615+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.542774+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.543018+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199109 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.543231+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.543406+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.543628+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.543833+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.543972+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198087 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55dd4/0xe46000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.544090+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867134094s of 10.890979767s, submitted: 14
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.544291+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.544425+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.544635+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55d37/0xe45000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.544767+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197035 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.544861+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.545040+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.545190+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
                                           Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.545355+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.545514+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197419 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c00/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.546359+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.546507+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055023193s of 11.093473434s, submitted: 18
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.546651+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.546771+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.546907+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.547073+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.547242+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.547374+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.547548+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.547709+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.547848+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.547944+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.548045+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.548176+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.548273+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.548423+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.548703+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.888220787s of 14.927642822s, submitted: 4
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.548885+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.549034+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.549185+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.549359+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.550379+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.552038+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.552366+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.553643+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3fef800
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.555036+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 14
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 1105920 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c4c/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.556416+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001877785s of 10.011025429s, submitted: 5
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.556879+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.558543+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.559483+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197819 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.559948+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.560247+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.560445+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.560637+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.560874+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197835 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.561027+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.561225+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001707077s of 10.005904198s, submitted: 3
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.561464+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.561720+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.561882+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.562073+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.562266+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.562497+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.562730+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.562870+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.563031+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.563271+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169968605s of 10.174468994s, submitted: 2
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.563423+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.563632+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.563813+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.563999+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.564250+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.564559+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.564697+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.564859+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.565047+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.565243+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.565437+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.565638+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.565795+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.565960+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.566172+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.566370+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.566558+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.566767+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.824216843s of 17.850557327s, submitted: 6
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.566922+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.567080+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.567261+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.567410+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.567547+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.567698+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.567882+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.568230+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.568394+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 1531904 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.568543+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103325696 unmapped: 1482752 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199031 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.812626839s of 10.004839897s, submitted: 95
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.568680+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 2637824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.568883+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.569072+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.569310+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.569522+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198585 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.569799+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.570004+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.570314+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.570547+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.570747+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 2621440 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201969 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.570916+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108821869s of 10.326163292s, submitted: 42
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103243776 unmapped: 2613248 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.571166+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd578a0/0xe47000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.571350+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.571517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.571725+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205847 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.571925+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.572172+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.572351+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.572528+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.572666+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208173 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.572856+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb041000/0x0/0x4ffc00000, data 0xd59285/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.619541168s of 10.691827774s, submitted: 44
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.573155+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.573322+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.573496+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.573690+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207727 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.574408+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.574617+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.574819+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.574977+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.575187+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208843 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.575387+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.575628+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884953499s of 11.015766144s, submitted: 6
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.575908+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 ms_handle_reset con 0x55c0a3fef800 session 0x55c0a3818380
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.576186+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.576333+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 15
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208555 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.576527+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 2293760 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.576665+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.576870+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.577055+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5ae5e/0xe4c000, compress 0x0/0x0/0x0, omap 0x18dca, meta 0x3d57236), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.577212+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215781 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.577383+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.577543+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.577742+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.577970+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.578214+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.581476212s of 12.955293655s, submitted: 224
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216753 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.578364+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.578531+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.578697+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.578870+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.579056+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214487 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.579228+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.579377+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.579503+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.579694+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.579848+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217965 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.579983+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289826393s of 10.338050842s, submitted: 31
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.580167+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.580357+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.580694+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.580826+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222287 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb03a000/0x0/0x4ffc00000, data 0xd5e4e2/0xe52000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.581020+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd5ff61/0xe55000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.581249+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.581382+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.581543+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.581724+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223979 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.581858+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.582016+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204831123s of 11.530242920s, submitted: 36
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.582273+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.582437+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd60097/0xe57000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.582611+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225925 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.582752+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.582924+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.583056+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.583255+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd61b66/0xe58000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.583456+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.583787+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd61acb/0xe57000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.584048+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.584315+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.584599+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.584793+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.297651291s of 13.383323669s, submitted: 59
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.584962+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.585187+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.585291+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.585490+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.585732+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.585951+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.586289+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.586715+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.586962+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.587323+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.587539+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.587845+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.588008+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.588171+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.588367+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.588546+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.588770+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.589006+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.589189+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.589396+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.589640+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.589960+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.590198+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.590405+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.590610+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.590790+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.590945+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.591196+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.591367+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.591525+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.591651+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.591792+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.591926+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.592089+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.592278+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.592510+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.592648+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.592812+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.592982+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.593149+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.247886658s of 39.155723572s, submitted: 13
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.593292+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.593477+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.593664+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.593855+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.594050+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.594270+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.594446+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.594651+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.594798+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.594950+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.595197+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229415 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.595463+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb032000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.258265495s of 12.265155792s, submitted: 3
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.595593+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02d000/0x0/0x4ffc00000, data 0xd6514f/0xe5d000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.595733+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.595866+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.595987+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234729 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.596168+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.596357+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.596517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.596951+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.597208+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.597357+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.597575+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.597798+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.598030+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.598247+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.598428+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.598635+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.053594589s of 16.302835464s, submitted: 44
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450f000
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 2203648 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.598826+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 16
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3655400
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.599212+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb029000/0x0/0x4ffc00000, data 0xd66ee7/0xe63000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.599571+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 17
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.599770+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.599988+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.600179+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.600395+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.600544+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.600701+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.600895+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.601159+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.601410+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.601571+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.601749+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.601929+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.602078+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.602261+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.602428+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.602581+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.602790+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.603062+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.603289+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.603499+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.603625+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.603754+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.603911+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.604090+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.604259+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.604404+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.604566+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.604705+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.604846+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.604977+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.605137+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.605303+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.605427+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.605548+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.605913+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.606074+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.606314+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.606500+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.606745+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.606901+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.607062+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.607229+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.607378+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.607574+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.607700+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.607824+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.608020+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.608176+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.608301+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.608440+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.608592+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.608724+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.608971+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.609152+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.609372+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.609525+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.609774+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.609955+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.610262+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.749771118s of 62.368705750s, submitted: 11
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.610698+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238939 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.610964+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.611400+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a450f000 session 0x55c0a3694a80
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.611705+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a3655400 session 0x55c0a6381500
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104071168 unmapped: 1785856 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.612012+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 18
Dec 04 10:52:57 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.612236+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238635 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.612484+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.612818+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.613078+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.613294+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.613521+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238779 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.613729+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.142169952s of 11.714550018s, submitted: 184
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.614024+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.614297+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.614597+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.614811+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242433 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.615021+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.721700+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.721981+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.722142+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.722331+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.722549+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.722755+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.722909+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.723048+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.723173+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.723283+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.723458+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.723666+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.723795+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.232194901s of 18.285558701s, submitted: 52
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.723912+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.724056+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.724211+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.724439+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.724619+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.724817+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.724982+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.725182+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.725393+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.725628+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.725841+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244343 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.726158+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.726343+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.726517+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.726712+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.992785454s of 15.000616074s, submitted: 4
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.726925+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244487 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.727078+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.727317+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.727593+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.727790+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.727935+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244471 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.728182+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb00b000/0x0/0x4ffc00000, data 0xd84fc5/0xe81000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.728346+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.728625+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.728815+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe9000/0x0/0x4ffc00000, data 0xda606f/0xea3000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500107765s of 10.000913620s, submitted: 13
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.728984+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 1679360 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252421 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.729226+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.729560+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.729806+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe2000/0x0/0x4ffc00000, data 0xdacba7/0xeaa000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.729943+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.730114+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104448000 unmapped: 1409024 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259093 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.730288+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 1335296 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.730508+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.730693+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.730925+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.939127922s of 10.002140999s, submitted: 20
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.731131+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 1892352 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255505 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.731257+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.731440+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.731586+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.731755+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.731934+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 2793472 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260241 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.732180+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xea172e/0xf9e000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.732334+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.732567+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.732756+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.055247307s of 10.003301620s, submitted: 24
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.732913+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 2891776 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264103 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.733067+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.733321+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.733476+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.733633+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.733814+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 2727936 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.733962+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.734188+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.734325+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.734457+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.734606+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.734758+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.734928+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.735089+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.735293+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.735680+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.735989+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.736270+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.736519+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.736665+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.736987+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.737182+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.737357+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.737500+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.737679+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.737849+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.738016+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.738203+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.738360+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.738505+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.738674+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.738887+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.739147+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.739318+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.739494+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.739698+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.739913+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.740053+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.740204+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.740360+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.740567+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.740712+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.740930+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.741134+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.741303+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.741452+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.741629+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.741787+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.741982+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.742146+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.742292+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.742422+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.742590+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.742727+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.742872+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.743010+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.743142+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.743294+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.743471+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.743658+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.743836+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.744017+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.744318+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.744466+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.744600+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.744795+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.744959+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.745087+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.745228+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.745360+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.745482+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.745629+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.745826+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.745960+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 2236416 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.746135+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 3325952 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.746285+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3301376 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:52:57 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:52:57 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 10:52:57 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.746427+0000)
Dec 04 10:52:57 compute-0 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec 04 10:52:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:52:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 10:52:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 04 10:52:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183430560' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: pgmap v1311: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1727248690' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2183430560' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 10:52:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14588 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 10:52:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 04 10:52:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895689020' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413a03400>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8435ce5940>)]
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:52:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 04 10:52:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877293756' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 10:52:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14596 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: from='client.14588 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2895689020' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 10:52:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3877293756' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 10:52:58 compute-0 podman[263093]: 2025-12-04 10:52:58.952118649 +0000 UTC m=+0.054091893 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 04 10:52:58 compute-0 podman[263092]: 2025-12-04 10:52:58.974384297 +0000 UTC m=+0.079789635 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec 04 10:52:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 10:52:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586584895' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:52:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:52:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183ed160>)]
Dec 04 10:52:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 04 10:52:59 compute-0 nova_compute[244644]: 2025-12-04 10:52:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:52:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14602 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: pgmap v1312: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:52:59 compute-0 ceph-mon[75358]: from='client.14596 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/586584895' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 04 10:52:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340669965' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 10:52:59 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.iwufnj(active, since 38m)
Dec 04 10:53:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 04 10:53:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268822331' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 04 10:53:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:00 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 10:53:00 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:53:00.942+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 10:53:00 compute-0 ceph-mon[75358]: from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:00 compute-0 ceph-mon[75358]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2340669965' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 10:53:00 compute-0 ceph-mon[75358]: mgrmap e19: compute-0.iwufnj(active, since 38m)
Dec 04 10:53:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/268822331' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 04 10:53:01 compute-0 crontab[263414]: (root) LIST (root)
Dec 04 10:53:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 04 10:53:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062305256' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1327104 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:29.261243+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:30.261370+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933851 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:31.261579+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:00.380226+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.7 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:00.390783+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.7 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 183)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:00.380226+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.7 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:00.390783+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.7 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:32.261812+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:01.368237+0000 osd.1 (osd.1) 184 : cluster [DBG] 8.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:01.382382+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1294336 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 185)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:01.368237+0000 osd.1 (osd.1) 184 : cluster [DBG] 8.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:01.382382+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:33.262044+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1294336 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.928408623s of 10.974489212s, submitted: 10
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:34.262194+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:03.425767+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.1d scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:03.436721+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1d scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1277952 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 187)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:03.425767+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.1d scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:03.436721+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1d scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:35.262417+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:04.456786+0000 osd.1 (osd.1) 188 : cluster [DBG] 8.1e scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:04.467391+0000 osd.1 (osd.1) 189 : cluster [DBG] 8.1e scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1277952 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943505 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 189)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:04.456786+0000 osd.1 (osd.1) 188 : cluster [DBG] 8.1e scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:04.467391+0000 osd.1 (osd.1) 189 : cluster [DBG] 8.1e scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:36.262612+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:05.448800+0000 osd.1 (osd.1) 190 : cluster [DBG] 8.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:05.459395+0000 osd.1 (osd.1) 191 : cluster [DBG] 8.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1261568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 191)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:05.448800+0000 osd.1 (osd.1) 190 : cluster [DBG] 8.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:05.459395+0000 osd.1 (osd.1) 191 : cluster [DBG] 8.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:37.262840+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1253376 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:38.262969+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1253376 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:39.263138+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:08.450736+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:08.461332+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1245184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 193)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:08.450736+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:08.461332+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:40.263333+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1245184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948331 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:41.263464+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:10.408633+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.b scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:10.419153+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.b scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1236992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 195)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:10.408633+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.b scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:10.419153+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.b scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:42.263663+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:11.383750+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:11.394320+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1236992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 197)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:11.383750+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:11.394320+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:43.263853+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:44.263986+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:13.401025+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.6 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:13.411460+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.6 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 199)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:13.401025+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.6 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:13.411460+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.6 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:45.264250+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955570 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:46.264372+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:47.264510+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 1220608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:48.264659+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 1220608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:49.264805+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1212416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:50.264936+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1212416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955570 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:51.265071+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.781251907s of 17.896800995s, submitted: 14
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:52.265167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:21.322519+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:21.333125+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 201)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:21.322519+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.19 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:21.333125+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.19 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:53.265397+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:22.315259+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:22.325839+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 203)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:22.315259+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:22.325839+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:54.265583+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1196032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:55.265708+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:24.278149+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.11 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:24.288644+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.11 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1196032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965230 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 205)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:24.278149+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.11 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:24.288644+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.11 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:56.265943+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:25.311390+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:25.321967+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1179648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 207)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:25.311390+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.13 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:25.321967+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.13 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:57.266152+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1179648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:58.266295+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:59.266399+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:00.266562+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:30.195689+0000 osd.1 (osd.1) 208 : cluster [DBG] 10.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:30.206247+0000 osd.1 (osd.1) 209 : cluster [DBG] 10.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967645 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 209)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:30.195689+0000 osd.1 (osd.1) 208 : cluster [DBG] 10.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:30.206247+0000 osd.1 (osd.1) 209 : cluster [DBG] 10.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:01.266801+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:31.224626+0000 osd.1 (osd.1) 210 : cluster [DBG] 10.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:31.238787+0000 osd.1 (osd.1) 211 : cluster [DBG] 10.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 1163264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 211)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:31.224626+0000 osd.1 (osd.1) 210 : cluster [DBG] 10.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:31.238787+0000 osd.1 (osd.1) 211 : cluster [DBG] 10.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.870033264s of 10.907306671s, submitted: 12
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:02.267010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:32.229898+0000 osd.1 (osd.1) 212 : cluster [DBG] 10.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:32.244007+0000 osd.1 (osd.1) 213 : cluster [DBG] 10.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1155072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 213)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:32.229898+0000 osd.1 (osd.1) 212 : cluster [DBG] 10.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:32.244007+0000 osd.1 (osd.1) 213 : cluster [DBG] 10.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:03.267253+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1146880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:04.267408+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1146880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:05.267626+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:34.285939+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.15 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:34.310690+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.15 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974888 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 215)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:34.285939+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.15 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:34.310690+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.15 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:06.267995+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 216 sent 215 num 1 unsent 1 sending 1
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:36.236429+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 216)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:36.236429+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.14 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:07.268276+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 217 sent 216 num 1 unsent 1 sending 1
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:36.275301+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 217)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:36.275301+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.14 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:08.268459+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 218 sent 217 num 1 unsent 1 sending 1
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:38.219793+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.0 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1138688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 218)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:38.219793+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.0 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:09.268637+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 3 last_log 221 sent 218 num 3 unsent 3 sending 3
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:38.269156+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.0 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:39.207881+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:39.250167+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1114112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 221)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:38.269156+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.0 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:39.207881+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.2 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:39.250167+0000 osd.1 (osd.1) 221 : cluster [DBG] 9.2 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:10.268843+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1105920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982123 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:11.269006+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1105920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:12.269163+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 223 sent 221 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:42.166901+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:42.209249+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1097728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 223)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:42.166901+0000 osd.1 (osd.1) 222 : cluster [DBG] 9.a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:42.209249+0000 osd.1 (osd.1) 223 : cluster [DBG] 9.a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.865514755s of 10.889443398s, submitted: 12
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:13.269396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 225 sent 223 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:43.119360+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.4 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:43.165264+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.4 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1097728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 225)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:43.119360+0000 osd.1 (osd.1) 224 : cluster [DBG] 9.4 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:43.165264+0000 osd.1 (osd.1) 225 : cluster [DBG] 9.4 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:14.269580+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 227 sent 225 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:44.095017+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:44.119727+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 227)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:44.095017+0000 osd.1 (osd.1) 226 : cluster [DBG] 9.1a scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:44.119727+0000 osd.1 (osd.1) 227 : cluster [DBG] 9.1a scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:15.269821+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989358 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:16.269969+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:17.270132+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 229 sent 227 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:47.009369+0000 osd.1 (osd.1) 228 : cluster [DBG] 9.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:47.037607+0000 osd.1 (osd.1) 229 : cluster [DBG] 9.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 229)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:47.009369+0000 osd.1 (osd.1) 228 : cluster [DBG] 9.12 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:47.037607+0000 osd.1 (osd.1) 229 : cluster [DBG] 9.12 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:18.270451+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:19.270572+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:20.270757+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991771 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:21.270942+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1073152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:22.271080+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 231 sent 229 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:52.017928+0000 osd.1 (osd.1) 230 : cluster [DBG] 9.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:52.035580+0000 osd.1 (osd.1) 231 : cluster [DBG] 9.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 231)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:52.017928+0000 osd.1 (osd.1) 230 : cluster [DBG] 9.10 scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:52.035580+0000 osd.1 (osd.1) 231 : cluster [DBG] 9.10 scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1073152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:23.271312+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.867640495s of 10.956263542s, submitted: 8
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:24.271461+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 233 sent 231 num 2 unsent 2 sending 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:54.075663+0000 osd.1 (osd.1) 232 : cluster [DBG] 9.1f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  will send 2025-12-04T10:21:54.103736+0000 osd.1 (osd.1) 233 : cluster [DBG] 9.1f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 233)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:54.075663+0000 osd.1 (osd.1) 232 : cluster [DBG] 9.1f scrub starts
Dec 04 10:53:01 compute-0 ceph-osd[87071]: log_client  logged 2025-12-04T10:21:54.103736+0000 osd.1 (osd.1) 233 : cluster [DBG] 9.1f scrub ok
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:25.271631+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:26.271758+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1056768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:27.271906+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1056768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:28.272048+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:29.272232+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:30.272447+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:31.272580+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1040384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:32.272768+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1040384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:33.272957+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 1032192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:34.273179+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 1032192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:35.273317+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 1024000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:36.273462+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 1015808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:37.273629+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 1015808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:38.273776+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:39.273911+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:40.274044+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:41.274349+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 999424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:42.274546+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 999424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:43.274744+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 991232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:44.274901+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 991232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:45.275142+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:46.275357+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:47.275504+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:48.275700+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:49.275844+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:50.276015+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:51.276187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 966656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:52.276400+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 966656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:53.276606+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:54.276828+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:55.276960+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 950272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:56.277118+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 950272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:57.277237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:58.277369+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:59.277547+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:00.277717+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:01.277850+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:02.277954+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:03.278142+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:04.278249+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:05.278407+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 917504 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:06.278539+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 917504 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:07.278666+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:08.278793+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:09.278973+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:10.279058+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:11.279180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:12.279366+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:13.279558+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 892928 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:14.279696+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 892928 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:15.279824+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:16.279979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:17.280187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 868352 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:18.280370+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 868352 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:19.280566+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:20.280702+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:21.280862+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:22.281027+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:23.281156+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:24.281301+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:25.281470+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:26.281618+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:27.281747+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:28.281875+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:29.282021+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:30.282166+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 827392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:31.282311+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 827392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:32.282549+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 819200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:33.282833+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 819200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:34.282979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:35.283179+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:36.283328+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:37.283531+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 794624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:38.283727+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 794624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:39.283877+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:40.284019+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:41.284267+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:42.284422+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 778240 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:43.284611+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 778240 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:44.284743+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 770048 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:45.284871+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 770048 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:46.285035+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:47.285171+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:48.285316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:49.285495+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:50.285627+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:51.285777+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:52.285934+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 745472 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:53.286125+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 745472 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:54.286260+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 737280 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:55.286389+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 737280 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:56.286538+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:57.286686+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:58.286826+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:59.286978+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 720896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:00.287166+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 720896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:01.287306+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:02.287475+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:03.287646+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:04.287778+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:05.287925+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:06.288094+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:07.288318+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:08.288473+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:09.288706+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:10.288930+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 688128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:11.289046+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 688128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:12.289193+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 679936 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:13.289429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 679936 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:14.289611+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:15.289748+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:16.289881+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:17.290022+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 663552 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:18.290207+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 663552 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:19.290386+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:20.290549+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:21.290745+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:22.290932+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 647168 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:23.291201+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 647168 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:24.291396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:25.291609+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:26.291841+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:27.292024+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 630784 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:28.292223+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 630784 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:29.292402+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:30.292697+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:31.292917+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:32.293122+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 614400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:33.293365+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 614400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:34.293549+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:35.293799+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:36.294017+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:37.294302+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 598016 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:38.294508+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 598016 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:39.294716+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:40.294892+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:41.295237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:42.295504+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 581632 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:43.295739+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 581632 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:44.295914+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 573440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:45.296145+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 573440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:46.296340+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 565248 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:47.296518+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 565248 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:48.296763+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:49.296957+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:50.297189+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:51.297365+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:52.297551+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:53.297755+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:54.298012+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 540672 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:55.298236+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 540672 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:56.298517+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:57.298735+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:58.299047+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:59.299271+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 524288 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:00.299494+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 524288 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:01.299926+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:02.300281+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:03.300641+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:04.300834+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 507904 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:05.301010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 507904 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:06.301184+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:07.301366+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:08.301507+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:09.301658+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 491520 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:10.301794+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 491520 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:11.301942+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 483328 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:12.302209+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 483328 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:13.302397+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:14.302579+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:15.302755+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:16.303005+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 466944 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:17.303221+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 466944 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:18.303379+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:19.303562+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:20.303794+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:21.304181+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 450560 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:22.304438+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 450560 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:23.304621+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:24.304805+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:25.304899+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:26.305014+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 434176 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:27.305192+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 434176 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:28.305361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:29.305496+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:30.305672+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:31.305821+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 417792 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:32.306023+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 417792 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:33.306233+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 409600 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:34.306396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 409600 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:35.306651+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:36.306813+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:37.307019+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:38.307234+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 393216 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:39.307508+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 393216 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:40.307731+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 385024 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:41.307876+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 385024 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:42.308020+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:43.308227+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:44.308345+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:45.308473+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 368640 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:46.308624+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 368640 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:47.308780+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:48.308928+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:49.309057+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:50.309203+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 352256 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:51.309352+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 352256 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:52.309550+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 344064 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:53.309765+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 344064 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:54.309933+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:55.310172+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:56.310449+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:57.310728+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 327680 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:58.311007+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Cumulative writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
                                           Interval WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 245760 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:59.311187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:00.311367+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:01.311501+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:02.311700+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 221184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:03.311920+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 221184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:04.312147+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:05.312278+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:06.312421+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:07.312552+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 204800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:08.312721+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 204800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:09.312844+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 196608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:10.312960+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 196608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:11.313066+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 188416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:12.313197+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 188416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:13.313356+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:14.313480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:15.313614+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:16.313760+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:17.313880+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 172032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:18.314029+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 172032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:19.314181+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 163840 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:20.314331+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 163840 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:21.314480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 155648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:22.314609+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 155648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:23.315126+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:24.315242+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:25.315342+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:26.315502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:27.315643+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:28.315785+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:29.316016+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:30.316187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 131072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:31.316325+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 131072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:32.316494+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 122880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:33.316684+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 122880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:34.316820+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 114688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:35.316937+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 114688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:36.317070+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:37.317206+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:38.317369+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:39.317530+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 98304 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:40.317702+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 98304 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:41.317852+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:42.317994+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:43.318147+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:44.318281+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 81920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:45.318430+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 81920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:46.318604+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:47.318781+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:48.318921+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:49.319059+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 65536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:50.319180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 65536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:51.319370+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:52.319540+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:53.319760+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:54.319894+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:55.320019+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:56.320149+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:57.320263+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:58.320390+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:59.320519+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:00.320653+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:01.320796+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:02.320941+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:03.321145+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:04.321282+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 24576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:05.321429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 24576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:06.358176+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 16384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:07.358331+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 8192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:08.358481+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 8192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:09.358650+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 0 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:10.358781+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 0 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:11.358929+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:12.359071+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:13.359290+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:14.359510+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:15.359730+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:16.359881+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:17.360024+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:18.360241+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:19.360413+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:20.360563+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:21.360690+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:22.360838+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.083770752s of 299.087066650s, submitted: 2
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:23.360968+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:24.361135+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:25.361258+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:26.361396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:27.361531+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:28.361657+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:29.361861+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:30.362146+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:31.362324+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:32.362460+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:33.362973+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:34.363176+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:35.363308+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:36.363440+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:37.363651+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:38.363867+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:39.364009+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:40.364220+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:41.364385+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:42.364572+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:43.364764+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:44.364948+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:45.365077+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:46.365351+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:47.365500+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:48.365645+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:49.365797+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:50.365937+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:51.366060+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:52.366231+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:53.366380+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:54.366523+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:55.366660+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:56.366850+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:57.367024+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:58.367167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:59.367362+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:00.367510+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:01.367693+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:02.367834+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:03.368078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:04.368243+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:05.368415+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:06.368578+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:07.368709+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:08.368905+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:09.369124+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:10.369337+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:11.369629+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:12.369777+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:13.369969+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:14.370151+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:15.370343+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:16.370550+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:17.370847+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:18.371044+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:19.372387+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:20.372533+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:21.372728+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:22.372897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:23.373844+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:24.373989+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:25.374458+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:26.374601+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:27.374773+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:28.374920+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:29.375068+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:30.375214+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:31.375561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:32.375676+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:33.375829+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:34.375995+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:35.376186+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:36.376365+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:37.376591+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:38.376857+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:39.377167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:40.377399+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:41.377525+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:42.377668+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:43.377837+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:44.377983+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:45.378152+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:46.378295+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:47.378436+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:48.378579+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:49.378722+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:50.378869+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:51.379014+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:52.379246+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:53.379464+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:54.379637+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:55.379777+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:56.379927+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:57.380132+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:58.380345+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:59.380513+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:00.380652+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:01.380780+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:02.380903+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:03.381066+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:04.381210+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:05.381361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:06.381514+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:07.381662+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:08.381811+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:09.381953+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:10.382115+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:11.382271+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:12.382413+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:13.382638+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:14.382829+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:15.382979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:16.383183+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:17.383334+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:18.383465+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:19.383628+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:20.383767+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:21.383938+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:22.384080+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:23.384293+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:24.384584+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:25.384711+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:26.384840+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:27.384976+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:28.385159+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:29.385282+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:30.385449+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:31.385624+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:32.385753+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:33.385933+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:34.386150+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:35.386286+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:36.386443+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:37.386668+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:38.386849+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:39.387042+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:40.387169+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:41.387313+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:42.387459+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:43.387608+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:44.387781+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:45.387927+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:46.388092+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:47.388313+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:48.388472+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:49.388602+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.388773+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.388948+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.389089+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.389288+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.389567+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.389692+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.389826+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.389961+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.390086+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.390237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.390377+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.390532+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.390675+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.390928+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.391078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.391241+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.391383+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.391523+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.391650+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.391830+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.391983+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.392176+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.392341+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.392571+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.392724+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.392858+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.393012+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.393203+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.393346+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.393473+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.393625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.393765+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.393903+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.394060+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.394179+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.394299+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.394428+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.394584+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.394717+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.394867+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.395012+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.395194+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.395341+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.395510+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.395666+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.395839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.395994+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.396165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.396302+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.396453+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.396588+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.396733+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.396955+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.397202+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.397394+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.397530+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.397707+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.397930+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.398244+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.398425+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.398636+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.398839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.399080+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.399336+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.399585+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.399763+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.399917+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.400183+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.400386+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.400568+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.400839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.400996+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.401180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.401591+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.401767+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.401885+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.402017+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.402154+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.402285+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.402429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.402582+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.402720+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.402857+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.403053+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590067fb800 session 0x559004f09340
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559009534400
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590071f1800 session 0x5590071bafc0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559007746000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.403312+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.403479+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.403620+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.403796+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.403932+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.404318+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.404553+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.404817+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.405241+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.405540+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.405794+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.406073+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.406303+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.406483+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.406679+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.406856+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.407030+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.407191+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.407427+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.407676+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.407921+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.408141+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.408319+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.408498+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.408729+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.408941+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.409142+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.409374+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.409551+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.409757+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.409885+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.410069+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.410266+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.410393+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.410544+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.410737+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.410922+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.411107+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.411306+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.411840+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.411958+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.412079+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.412251+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.412384+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.412575+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.412736+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.412894+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.413052+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.413176+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.413361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.413508+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.413700+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.413873+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.414203+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.414371+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.414504+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.414660+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.414808+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.414936+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.415165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.415344+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.415474+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.415627+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.415741+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.416141+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.416332+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.416544+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.416762+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.416892+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.065643311s of 300.198425293s, submitted: 90
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.417044+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.417219+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.417379+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.417563+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.417754+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:28.417902+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.418055+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.418189+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.418327+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.418482+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.418639+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.418800+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.418948+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.419167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.419348+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.419550+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.419697+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.419888+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.420043+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.420154+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.420299+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.420473+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.420604+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.420730+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.420891+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.421025+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.421170+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.421300+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.421461+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.421592+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.421742+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.421892+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.422035+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.422189+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.422405+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.423081+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.423335+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.423472+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.423594+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.423742+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.423966+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.424380+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.424532+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.424696+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.424841+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.424970+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.425161+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.425316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.425493+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.425632+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.425807+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.425989+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.426167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.426308+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.426444+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.426603+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.426738+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.426874+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.427077+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.427270+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.427455+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.427584+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.427724+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.427887+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.428061+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.428269+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.428418+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.428627+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.428764+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.428929+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.429111+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.429259+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.429383+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.429547+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.429660+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.429797+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.429890+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.430015+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.430159+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.430305+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.430399+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.430515+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.430625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.430763+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.430958+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.431111+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.431343+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.431496+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.431631+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.431750+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.431940+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.432084+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.432237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.432391+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.432530+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.432691+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.432822+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.432991+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.433177+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.433317+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.433512+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.433642+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.433776+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.433945+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.434085+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.434227+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.434435+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.434615+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.434804+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.434970+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.435187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.435480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.435685+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.435867+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.436018+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.436211+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.436408+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.436559+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.436726+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.436877+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.437074+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.437198+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.437369+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.437519+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.437634+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.437774+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.437974+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.438192+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.438395+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.438554+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.438729+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.438947+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.439251+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.439441+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.439564+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.439744+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.439876+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.440020+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.440199+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.440379+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.440607+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.440777+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.440936+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.441071+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.441193+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.441394+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.441577+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.441748+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.441948+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.442155+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.442339+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.442739+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.442906+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.443064+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.443201+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.443348+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.443500+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.443625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.443814+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.443969+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.444155+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.444389+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.444565+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.444739+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.444886+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.445037+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.445253+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.445375+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.445509+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.445650+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.445862+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.446002+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.446238+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.446391+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.446530+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.446692+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.446832+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.446989+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.447165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.447324+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.447468+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.447621+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.447760+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.447895+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.448432+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.448602+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.448749+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.448930+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.449060+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.449228+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.449400+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.449553+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.449694+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.449820+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.449954+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.450123+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.450270+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.450408+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.450597+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.450724+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.450924+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.451190+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.451313+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.451472+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.452078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.452648+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.453509+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.454010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.454333+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.454778+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.455169+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.455520+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.455743+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.455985+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.456124+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.456450+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.456696+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.456915+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.457135+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.457280+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.457458+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.457599+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.457746+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.457870+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.458027+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.458229+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.458471+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.458713+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.458909+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.459084+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.459286+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.459407+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.459603+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.459871+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.460069+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.460268+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.460444+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.460596+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.460799+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.460951+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.461170+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.461316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.461437+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.461639+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.461809+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.461903+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.462077+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.462316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.462465+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.462608+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.462803+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.462964+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.463154+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.463326+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.463463+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.463684+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.463889+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.464062+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.464245+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.464397+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.464613+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.464738+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.464897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.465086+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.465331+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.465487+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.465659+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.465824+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.466019+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.466182+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.466391+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.466569+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.466726+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.466919+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.467091+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.467383+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.467501+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.467645+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.467792+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.467956+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.468190+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.468343+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.468550+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.468749+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.468998+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.469141+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.469417+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.469629+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.469871+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.470047+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.470291+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.470427+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.470561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.470702+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.470891+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.471048+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.471224+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.471373+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.471529+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.471693+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.836059570s of 299.876525879s, submitted: 22
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.471843+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.471974+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.472132+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.472260+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.472382+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.472513+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.472625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.472767+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.472928+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.473176+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.473395+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.473524+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.473684+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.473811+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.473985+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.474082+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.474314+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.474446+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.474664+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.474818+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.475005+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.475148+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.475325+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.475502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.475642+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.475785+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.475922+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.476037+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.476284+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.476429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.476605+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.476780+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.476964+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.477522+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.478043+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.478533+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.478683+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.479089+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.479583+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.479825+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.480151+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.480491+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.480652+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.480925+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.481180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.481340+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.481573+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.481733+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.481962+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.482206+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.482464+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.482781+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.482980+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.483179+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.483392+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.483623+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.483784+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.483958+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.484156+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.484316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.484589+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.484735+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.484933+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.485118+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.485358+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.485543+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.485671+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.485809+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.485950+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.486127+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.486859+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.486982+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.487167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.487316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.487525+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.487676+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.578043+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.578182+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.578354+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.578502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.578730+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.578952+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.579149+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.579317+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.579412+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.579534+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.579661+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.579861+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.580038+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.580265+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.580551+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.580734+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.580968+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.581180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.581370+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.581571+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.581715+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.581889+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.582293+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.583383+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.584650+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.585827+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.586617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.587145+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.587378+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.587549+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.587783+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.587946+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.588305+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.588665+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.589023+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.589356+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.589581+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.589744+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.589968+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.590172+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.590417+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008cfc000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.996520996s of 117.133117676s, submitted: 90
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 753664 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.590639+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce52000/0x0/0x4ffc00000, data 0x11abd4/0x1d8000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 745472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.590844+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 17481728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 120 ms_handle_reset con 0x559008cfc000 session 0x559009955340
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.591009+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008d7f400
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 90808320 unmapped: 8806400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.591183+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136303 data_alloc: 218103808 data_used: 5976
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 17096704 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 ms_handle_reset con 0x559008d7f400 session 0x559008e3d880
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x191e3ca/0x19e1000, compress 0x0/0x0/0x0, omap 0x106c6, meta 0x2bbf93a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.591370+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.591617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.591971+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.592192+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.592380+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141665 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.592700+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.592921+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.593234+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.593493+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.593718+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.593878+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.594036+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.594217+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.594622+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.594768+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.594954+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.595120+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.595265+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.595480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.466564178s of 22.644350052s, submitted: 41
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.595717+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 10
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143287 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.595897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.596058+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008cf0400
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.596175+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.596359+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.596535+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144979 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.596704+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x11671, meta 0x2bbe98f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.596839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.596983+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.597189+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x116d5, meta 0x2bbe92b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.574191093s of 10.002868652s, submitted: 9
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.597363+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 11
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149895 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.597557+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.597733+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb641000/0x0/0x4ffc00000, data 0x1921c53/0x19ea000, compress 0x0/0x0/0x0, omap 0x11be1, meta 0x2bbe41f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.597982+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.598190+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.598340+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63b000/0x0/0x4ffc00000, data 0x19239f3/0x19ef000, compress 0x0/0x0/0x0, omap 0x1244f, meta 0x2bbdbb1), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153339 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.598481+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.598625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63d000/0x0/0x4ffc00000, data 0x1923a58/0x19ef000, compress 0x0/0x0/0x0, omap 0x125d3, meta 0x2bbda2d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.598777+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.598945+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.307727814s of 10.003003120s, submitted: 55
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.599188+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157951 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.599387+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.599597+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19255a1/0x19f2000, compress 0x0/0x0/0x0, omap 0x13360, meta 0x2bbcca0), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.599781+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.599954+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.600215+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156927 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.600380+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x192566b/0x19f2000, compress 0x0/0x0/0x0, omap 0x13585, meta 0x2bbca7b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.600502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.600673+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 14950400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.600815+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.600983+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13a6f, meta 0x2bbc591), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158173 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.601199+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.601363+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.946354866s of 13.003514290s, submitted: 32
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.601528+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13b4d, meta 0x2bbc4b3), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.601656+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.601816+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157455 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.601971+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13d95, meta 0x2bbc26b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.602142+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.602344+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x192589a/0x19f3000, compress 0x0/0x0/0x0, omap 0x13f02, meta 0x2bbc0fe), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.602487+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.602654+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158813 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.602809+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.602935+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.955944061s of 10.002535820s, submitted: 20
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.603066+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x192585d/0x19f0000, compress 0x0/0x0/0x0, omap 0x14225, meta 0x2bbbddb), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.603202+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.603353+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163967 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.603504+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.603648+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19275c7/0x19f4000, compress 0x0/0x0/0x0, omap 0x149f5, meta 0x2bbb60b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.603819+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.603943+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.604126+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x19275f6/0x19f3000, compress 0x0/0x0/0x0, omap 0x14b15, meta 0x2bbb4eb), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165991 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.604239+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb634000/0x0/0x4ffc00000, data 0x19290da/0x19f6000, compress 0x0/0x0/0x0, omap 0x14d43, meta 0x2bbb2bd), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.604364+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.678595543s of 10.002803802s, submitted: 82
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.604506+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.604636+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.604754+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x1422b, meta 0x2bbbdd5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166245 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.604883+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.605023+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.605184+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.605326+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x14347, meta 0x2bbbcb9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.605478+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165655 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.605629+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.605754+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.605904+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x1457f, meta 0x2bbba81), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.606041+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.606299+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.570578575s of 13.004203796s, submitted: 16
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.606610+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165495 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.606885+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.607192+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x14627, meta 0x2bbb9d9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008d81400
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.607335+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 14770176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19293e5/0x19f7000, compress 0x0/0x0/0x0, omap 0x14747, meta 0x2bbb8b9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 12
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.607630+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 14696448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.607900+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170315 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 14688256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.608143+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19296ba/0x19f7000, compress 0x0/0x0/0x0, omap 0x14867, meta 0x2bbb799), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.608381+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.608597+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.608848+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887221336s of 10.005003929s, submitted: 46
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.609081+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173441 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.609368+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.609660+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 14622720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x192d066/0x19fd000, compress 0x0/0x0/0x0, omap 0x14fe9, meta 0x2bbb017), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.609939+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.610184+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.610446+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183767 data_alloc: 218103808 data_used: 6561
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 14516224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb624000/0x0/0x4ffc00000, data 0x1930971/0x1a02000, compress 0x0/0x0/0x0, omap 0x15745, meta 0x2bba8bb), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.610598+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.610841+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.611194+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 12361728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.611396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 12345344 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb61f000/0x0/0x4ffc00000, data 0x19360be/0x1a0b000, compress 0x0/0x0/0x0, omap 0x1637b, meta 0x2bb9c85), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.471419334s of 10.002448082s, submitted: 188
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.611566+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194735 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.611743+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.611940+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 12312576 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.612171+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.612384+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1939a3d/0x1a12000, compress 0x0/0x0/0x0, omap 0x16e2c, meta 0x2bb91d4), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.612524+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199515 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.612773+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.612934+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.613287+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.613476+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264224052s of 10.198055267s, submitted: 77
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.613668+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198365 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b56b/0x1a14000, compress 0x0/0x0/0x0, omap 0x1841a, meta 0x2bb7be6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.613878+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.614190+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.614453+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.614698+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.614863+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199897 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.615180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.615335+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.615507+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.615707+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.554255486s of 10.002036095s, submitted: 6
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.615873+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199753 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.616015+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b66b/0x1a15000, compress 0x0/0x0/0x0, omap 0x189ae, meta 0x2bb7652), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.616153+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.616313+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.616489+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.616649+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.616804+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.617029+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.617234+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.617444+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.617597+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.617749+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.617930+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.618203+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060717583s of 14.003334045s, submitted: 8
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.618418+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.618567+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.618746+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ace7, meta 0x2bb5319), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.618886+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.619203+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.619430+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.619566+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.619719+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.619852+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.620000+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.796292305s of 10.001618385s, submitted: 11
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.620208+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.620356+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.620485+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.620677+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.620839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 ms_handle_reset con 0x559008d81400 session 0x55900771efc0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.621010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 13
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.621216+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.621352+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.621486+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10780672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.621622+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715125084s of 10.001788139s, submitted: 197
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b864/0x1a15000, compress 0x0/0x0/0x0, omap 0x1b9f0, meta 0x2bb4610), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.621816+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.622026+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200871 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.622188+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.622386+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.622547+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 10747904 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.622771+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b993/0x1a15000, compress 0x0/0x0/0x0, omap 0x1c126, meta 0x2bb3eda), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 10731520 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.622927+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200281 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.623079+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.623255+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.623409+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c596, meta 0x2bb3a6a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.623631+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.623813+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200297 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.624000+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.359195709s of 13.108038902s, submitted: 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.624190+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.624415+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.624573+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.624716+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200121 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.624875+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.625057+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.625212+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.625382+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bc27/0x1a16000, compress 0x0/0x0/0x0, omap 0x1cde8, meta 0x2bb3218), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.625530+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205053 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.625689+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.005226135s of 10.002261162s, submitted: 14
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.625861+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.626018+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.626179+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.626334+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205867 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.626561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.626735+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.626891+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.627062+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.627205+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206603 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1de8c, meta 0x2bb2174), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.627354+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.561585426s of 10.002349854s, submitted: 22
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.628269+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1e199, meta 0x2bb1e67), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.628401+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb614000/0x0/0x4ffc00000, data 0x193c0b4/0x1a18000, compress 0x0/0x0/0x0, omap 0x1e227, meta 0x2bb1dd9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.628566+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.628709+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206715 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.628924+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.629077+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.629308+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.629501+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.629687+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212201 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.629839+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.630009+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.303206444s of 10.570754051s, submitted: 62
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.630149+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.630361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193f860/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1f202, meta 0x2bb0dfe), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.630515+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213893 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.630681+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.630823+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.630985+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.631171+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.631311+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214721 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.631475+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.631609+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.631775+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.631899+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.350849152s of 12.433691978s, submitted: 6
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.632007+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f863, meta 0x2bb079d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214737 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.632138+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.632292+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.632457+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fbb7, meta 0x2bb0449), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.632660+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.632806+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214163 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.632975+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.633164+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.633312+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fcd3, meta 0x2bb032d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.633502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.983986855s of 10.001843452s, submitted: 9
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.633695+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.633833+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1ffe0, meta 0x2bb0020), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.633979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.634130+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 10452992 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fabe/0x1a1c000, compress 0x0/0x0/0x0, omap 0x200fc, meta 0x2baff04), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.634336+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.634494+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.634635+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 10444800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.634786+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.634933+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.635078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fb32/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20450, meta 0x2bafbb0), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.635255+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.504415512s of 10.521648407s, submitted: 9
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215695 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.635403+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.635585+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193faee/0x1a1d000, compress 0x0/0x0/0x0, omap 0x2075d, meta 0x2baf8a3), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.635740+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.635977+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fcb6/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20995, meta 0x2baf66b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.636131+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218185 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.636304+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.636453+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.636600+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.636716+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.636854+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.069396019s of 10.170839310s, submitted: 20
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217611 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.637030+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fd52/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20eda, meta 0x2baf126), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.637263+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.637427+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fdb7/0x1a1d000, compress 0x0/0x0/0x0, omap 0x211e7, meta 0x2baee19), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.637605+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.637754+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218729 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.637905+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.638052+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.638213+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fe1c/0x1a1d000, compress 0x0/0x0/0x0, omap 0x21610, meta 0x2bae9f0), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.638396+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.638609+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fe4b/0x1a1c000, compress 0x0/0x0/0x0, omap 0x217ba, meta 0x2bae846), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217995 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.638743+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.638992+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.992568970s of 12.027859688s, submitted: 18
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.639197+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.639377+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.639561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb5f7000/0x0/0x4ffc00000, data 0x19586a5/0x1a35000, compress 0x0/0x0/0x0, omap 0x21801, meta 0x2bae7ff), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223627 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 9969664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.639689+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.640142+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0x1985ab0/0x1a62000, compress 0x0/0x0/0x0, omap 0x21964, meta 0x3d4e69c), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.640292+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92471296 unmapped: 7143424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.640424+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92528640 unmapped: 7086080 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.640563+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234575 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6799360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.640722+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 6848512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.640860+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e9000/0x0/0x4ffc00000, data 0x19c5e65/0x1aa3000, compress 0x0/0x0/0x0, omap 0x22199, meta 0x3d4de67), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e1000/0x0/0x4ffc00000, data 0x19cdc69/0x1aab000, compress 0x0/0x0/0x0, omap 0x22271, meta 0x3d4dd8f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.162047386s of 10.287876129s, submitted: 58
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92839936 unmapped: 6774784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.640992+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 6479872 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.641202+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 6389760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22541, meta 0x3d4dabf), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.641345+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228761 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6709248 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.641423+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22739, meta 0x3d4d8c7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6610944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.641596+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 6316032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.641764+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 6275072 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.641935+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 6266880 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.642159+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232545 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 6029312 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.642306+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa387000/0x0/0x4ffc00000, data 0x1a2844f/0x1b05000, compress 0x0/0x0/0x0, omap 0x22a96, meta 0x3d4d56a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 4759552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.642466+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.685272217s of 10.002084732s, submitted: 83
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 4923392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.642601+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 5120000 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.642759+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 4947968 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.642880+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245549 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 4694016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.643003+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.643165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa305000/0x0/0x4ffc00000, data 0x1aa81ef/0x1b87000, compress 0x0/0x0/0x0, omap 0x235e8, meta 0x3d4ca18), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.643309+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 4685824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa2fb000/0x0/0x4ffc00000, data 0x1ab291d/0x1b91000, compress 0x0/0x0/0x0, omap 0x238ae, meta 0x3d4c752), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.643498+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 4554752 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.643640+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253243 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95166464 unmapped: 4448256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.643802+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa280000/0x0/0x4ffc00000, data 0x1b28cb3/0x1c0a000, compress 0x0/0x0/0x0, omap 0x2443b, meta 0x3d4bbc5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 3039232 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.643946+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.631328583s of 10.002451897s, submitted: 112
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 3588096 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.644062+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 3457024 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.644277+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 3366912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.644545+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1b5d7a7/0x1c3e000, compress 0x0/0x0/0x0, omap 0x249db, meta 0x3d4b625), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254393 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 3162112 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.644692+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.644840+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.644961+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.645143+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.645293+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa1ec000/0x0/0x4ffc00000, data 0x1bbf955/0x1ca0000, compress 0x0/0x0/0x0, omap 0x25173, meta 0x3d4ae8d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264037 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2056192 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.645512+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 1835008 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.645664+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.661271095s of 10.002766609s, submitted: 80
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 1777664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.645955+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 1687552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.646174+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98123776 unmapped: 2539520 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.646376+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268593 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa14e000/0x0/0x4ffc00000, data 0x1c5b085/0x1d3d000, compress 0x0/0x0/0x0, omap 0x25833, meta 0x3d4a7cd), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98238464 unmapped: 2424832 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.646645+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.646901+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.647090+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.647339+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 1843200 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.647505+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa137000/0x0/0x4ffc00000, data 0x1c75450/0x1d55000, compress 0x0/0x0/0x0, omap 0x25a20, meta 0x3d4a5e0), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa10b000/0x0/0x4ffc00000, data 0x1ca12c8/0x1d81000, compress 0x0/0x0/0x0, omap 0x25f3b, meta 0x3d4a0c5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267149 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.647738+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.647919+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730478287s of 10.002023697s, submitted: 71
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100179968 unmapped: 1531904 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.648147+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x261c3, meta 0x3d49e3d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.648307+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x262e3, meta 0x3d49d1d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.648505+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277587 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1515520 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.648649+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.648843+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.649029+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.649215+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.649547+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa065000/0x0/0x4ffc00000, data 0x1d45c1e/0x1e27000, compress 0x0/0x0/0x0, omap 0x26ca5, meta 0x3d4935b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283849 data_alloc: 218103808 data_used: 7211
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.649757+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.649944+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 3072000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.822227478s of 10.002123833s, submitted: 99
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.650125+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d479b6/0x1e2b000, compress 0x0/0x0/0x0, omap 0x27427, meta 0x3d48bd9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.650289+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.650564+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d47ae5/0x1e2b000, compress 0x0/0x0/0x0, omap 0x274b7, meta 0x3d48b49), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287513 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.650765+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.650971+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05c000/0x0/0x4ffc00000, data 0x1d49580/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c2c, meta 0x3d483d4), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.651161+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.651331+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.651542+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27e6c, meta 0x3d48194), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287719 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.651687+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.651842+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.652141+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.652399+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.905422211s of 11.945888519s, submitted: 27
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c0c, meta 0x3d483f4), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.652561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294435 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.652685+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.661728+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.661863+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.662020+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x1d4cd05/0x1e34000, compress 0x0/0x0/0x0, omap 0x285b1, meta 0x3d47a4f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.662210+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293795 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.662368+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.662605+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.662751+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa054000/0x0/0x4ffc00000, data 0x1d4ea03/0x1e36000, compress 0x0/0x0/0x0, omap 0x28e3e, meta 0x3d471c2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.662897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821330070s of 10.003786087s, submitted: 83
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.663030+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 04 10:53:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/81841245' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299313 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.663196+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.663329+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa052000/0x0/0x4ffc00000, data 0x1d504cd/0x1e38000, compress 0x0/0x0/0x0, omap 0x2966d, meta 0x3d46993), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.663449+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.663617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.663759+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 2998272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306283 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.663893+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.663987+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.664091+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x1d53e14/0x1e3e000, compress 0x0/0x0/0x0, omap 0x2a2ce, meta 0x3d45d32), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.664258+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 2973696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.688855171s of 10.056839943s, submitted: 73
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.664367+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 2965504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306057 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.664543+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 2932736 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.664685+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.664822+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.665011+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa04a000/0x0/0x4ffc00000, data 0x1d55ba7/0x1e40000, compress 0x0/0x0/0x0, omap 0x2ab5c, meta 0x3d454a4), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.665152+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311575 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.665266+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.665385+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.665555+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.665717+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b14e, meta 0x3d44eb2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.665884+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.666062+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310999 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.666222+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.768828392s of 13.002545357s, submitted: 53
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.666691+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.666844+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b195, meta 0x3d44e6b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.667219+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.667532+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315337 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.667782+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.667970+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.668680+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d59284/0x1e47000, compress 0x0/0x0/0x0, omap 0x2b785, meta 0x3d4487b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.668855+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.669574+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315321 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.669998+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.965646744s of 10.002288818s, submitted: 29
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.670401+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.670901+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d592e8/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bcca, meta 0x3d44336), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.671068+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.671240+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316439 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.671410+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.671586+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592e6/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bfd7, meta 0x3d44029), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.672003+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.672374+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.672617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315689 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.673061+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.673311+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.925606728s of 11.002860069s, submitted: 14
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.673642+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.673898+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.674361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315545 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.674494+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.674731+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.675007+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2ca1a, meta 0x3d435e6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.675247+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.675449+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315929 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.675646+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2cce0, meta 0x3d43320), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.675846+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980464935s of 10.001944542s, submitted: 11
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.676036+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2807 syncs, 3.71 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3259 writes, 9856 keys, 3259 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s
                                           Interval WAL: 3259 writes, 1412 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.676161+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.676288+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.676446+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.676625+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.676828+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.676992+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 1794048 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.677167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.677283+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.677417+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.627371788s of 10.002052307s, submitted: 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d59677/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d5c0, meta 0x3d42a40), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.677585+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.677747+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.677948+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315817 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.678123+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559005f52800 session 0x559004ecc000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559007747800
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 1646592 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5590067fa000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: get_auth_request con 0x5590067fbc00 auth_method 0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.678267+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 1662976 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559009534400 session 0x559007202a80
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5590091bdc00
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559007746000 session 0x559008c45180
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008a1cc00
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.678428+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x1d5980a/0x1e48000, compress 0x0/0x0/0x0, omap 0x2da77, meta 0x3d42589), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.678557+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.678705+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319383 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 1523712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.678827+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.678923+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x1d83259/0x1e72000, compress 0x0/0x0/0x0, omap 0x2dabe, meta 0x3d42542), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.679051+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.679222+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.993534088s of 11.781046867s, submitted: 23
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.679388+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330843 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1097728 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.679519+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 950272 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.679683+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 1441792 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.679847+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1dc6753/0x1eb5000, compress 0x0/0x0/0x0, omap 0x2dcaf, meta 0x3d42351), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.679973+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fac000/0x0/0x4ffc00000, data 0x1df0ce0/0x1ee0000, compress 0x0/0x0/0x0, omap 0x2ddcb, meta 0x3d42235), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.680185+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326003 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 1204224 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.680338+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102793216 unmapped: 1015808 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.680480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.680604+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.680815+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 1703936 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.055461884s of 10.391435623s, submitted: 58
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.680981+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330561 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x1e57622/0x1f46000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.681830+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.682437+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559009277800
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.682852+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x1eab017/0x1f9b000, compress 0x0/0x0/0x0, omap 0x2e1f4, meta 0x3d41e0c), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 1359872 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.683022+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 950272 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 14
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.683386+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351197 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 794624 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x1f0c618/0x1ffd000, compress 0x0/0x0/0x0, omap 0x2e61d, meta 0x3d419e3), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.683926+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 1359872 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.684084+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.684582+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.684979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 1163264 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.855733871s of 10.000440598s, submitted: 73
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.685434+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343261 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 3227648 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.685810+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 3014656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.686006+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.686422+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.686789+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.687017+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348665 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 2490368 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.687238+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 2482176 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.687414+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 3538944 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9daa000/0x0/0x4ffc00000, data 0x1ff2a65/0x20e2000, compress 0x0/0x0/0x0, omap 0x2ec7e, meta 0x3d41382), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.687576+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.687707+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.970839500s of 10.000647545s, submitted: 52
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.687897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350107 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 3497984 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x1ff6b45/0x20e6000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.688136+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.688285+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.688617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d7a000/0x0/0x4ffc00000, data 0x202215b/0x2112000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.688932+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105857024 unmapped: 3194880 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.689158+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361775 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 3162112 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d25000/0x0/0x4ffc00000, data 0x2075e62/0x2167000, compress 0x0/0x0/0x0, omap 0x2f060, meta 0x3d40fa0), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.689298+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 3121152 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.689477+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 2744320 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.689746+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.689984+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.027631760s of 10.000502586s, submitted: 46
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.690285+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361939 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3014656 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.690548+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x20cdd17/0x21be000, compress 0x0/0x0/0x0, omap 0x2f3fb, meta 0x3d40c05), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 2826240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.690738+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 2809856 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.690911+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 2433024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.691163+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x211dc20/0x220f000, compress 0x0/0x0/0x0, omap 0x2f824, meta 0x3d407dc), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.691320+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c6f000/0x0/0x4ffc00000, data 0x212a9c0/0x221c000, compress 0x0/0x0/0x0, omap 0x2f8f9, meta 0x3d40707), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369473 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.691495+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 3186688 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.691780+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 3112960 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.692001+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 1892352 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.692144+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c12000/0x0/0x4ffc00000, data 0x218ace6/0x227a000, compress 0x0/0x0/0x0, omap 0x2faa3, meta 0x3d4055d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 1449984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.208406448s of 10.001956940s, submitted: 71
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.692281+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376241 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 1368064 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.692418+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9bad000/0x0/0x4ffc00000, data 0x21ef2fe/0x22df000, compress 0x0/0x0/0x0, omap 0x2fe3e, meta 0x3d401c2), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 2351104 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.692607+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109215744 unmapped: 1933312 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.692825+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.692998+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.693165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559008765000 session 0x559008126540
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x55900887c400
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379839 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109101056 unmapped: 2048000 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.693354+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 1859584 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.693499+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b3b000/0x0/0x4ffc00000, data 0x22618f6/0x2351000, compress 0x0/0x0/0x0, omap 0x30383, meta 0x3d3fc7d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110370816 unmapped: 778240 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.693624+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 753664 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.693763+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 557056 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b0f000/0x0/0x4ffc00000, data 0x228d478/0x237d000, compress 0x0/0x0/0x0, omap 0x3049f, meta 0x3d3fb61), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.538078308s of 10.000422478s, submitted: 61
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.693946+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384499 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 1605632 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x22ac7b5/0x239c000, compress 0x0/0x0/0x0, omap 0x306d7, meta 0x3d3f929), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.694135+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.694310+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 1556480 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.694664+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ad6000/0x0/0x4ffc00000, data 0x22c679d/0x23b6000, compress 0x0/0x0/0x0, omap 0x3083a, meta 0x3d3f7c6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 1515520 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.694825+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.695337+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386235 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110469120 unmapped: 2777088 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.695538+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9a75000/0x0/0x4ffc00000, data 0x2327bf3/0x2417000, compress 0x0/0x0/0x0, omap 0x30bd5, meta 0x3d3f42b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1638400 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.695733+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 1277952 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.695951+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.696180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.300408363s of 10.016177177s, submitted: 165
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.696355+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393819 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 2449408 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.696501+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x31161, meta 0x3d3ee9f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.696718+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.696906+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x3127d, meta 0x3d3ed83), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111484928 unmapped: 2809856 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.697080+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 2646016 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.697338+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399779 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 1441792 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.697553+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.697762+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9952000/0x0/0x4ffc00000, data 0x244b70c/0x253a000, compress 0x0/0x0/0x0, omap 0x3191a, meta 0x3d3e6e6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31a36, meta 0x3d3e5ca), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.697999+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.698221+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112656384 unmapped: 2686976 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.698382+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177977562s of 10.428889275s, submitted: 88
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405161 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 2506752 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.698520+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x24c8603/0x25b7000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.698683+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 2220032 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.698925+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.699062+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 2449408 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.699237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31e52, meta 0x3d3e1ae), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413183 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31fb5, meta 0x3d3e04b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 2277376 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.699387+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 2269184 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.699513+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 974848 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.699664+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 2269184 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.699814+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9849000/0x0/0x4ffc00000, data 0x2550ee0/0x2643000, compress 0x0/0x0/0x0, omap 0x3227b, meta 0x3d3dd85), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 2260992 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.699975+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413527 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277583122s of 10.388940811s, submitted: 61
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 3153920 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.700145+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97f7000/0x0/0x4ffc00000, data 0x25a30f1/0x2695000, compress 0x0/0x0/0x0, omap 0x32397, meta 0x3d3dc69), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.700302+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.700480+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.700696+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 3473408 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.700864+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97c7000/0x0/0x4ffc00000, data 0x25d36e4/0x26c5000, compress 0x0/0x0/0x0, omap 0x32732, meta 0x3d3d8ce), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424773 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3407872 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.701025+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3399680 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.701252+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 ms_handle_reset con 0x559009277800 session 0x559006a20e00
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.701443+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.701622+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x26478b5/0x2739000, compress 0x0/0x0/0x0, omap 0x32d93, meta 0x3d3d26d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 1679360 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.701794+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426763 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.372922897s of 10.013011932s, submitted: 276
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f9745000/0x0/0x4ffc00000, data 0x26522a5/0x2745000, compress 0x0/0x0/0x0, omap 0x330dd, meta 0x3d3cf23), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 3465216 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.701941+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 3211264 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.702126+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 2023424 heap: 119537664 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.702324+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.702606+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.702814+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7384000/0x0/0x4ffc00000, data 0x26d1331/0x27c6000, compress 0x0/0x0/0x0, omap 0x336ef, meta 0x607c911), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434585 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 1662976 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.703007+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.703158+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.703434+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.703632+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7356000/0x0/0x4ffc00000, data 0x2701ed1/0x27f6000, compress 0x0/0x0/0x0, omap 0x3380b, meta 0x607c7f5), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118259712 unmapped: 2326528 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.703780+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1441097 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.071870804s of 10.087114334s, submitted: 71
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.703985+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.704129+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 2138112 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.704270+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118587392 unmapped: 1998848 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.704441+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f72db000/0x0/0x4ffc00000, data 0x277bc0b/0x2871000, compress 0x0/0x0/0x0, omap 0x34016, meta 0x607bfea), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118538240 unmapped: 3096576 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.704571+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443461 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.704728+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.704869+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.705040+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.705340+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72b5000/0x0/0x4ffc00000, data 0x279f910/0x2895000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.705515+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445049 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.705678+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.705796+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.705960+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.240313530s of 12.346166611s, submitted: 51
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 3768320 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.706085+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 3735552 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.706276+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x27cf715/0x28c6000, compress 0x0/0x0/0x0, omap 0x348ff, meta 0x607b701), peers [0,2] op hist [0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449311 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 2482176 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.706441+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7268000/0x0/0x4ffc00000, data 0x27ecf3f/0x28e4000, compress 0x0/0x0/0x0, omap 0x34a62, meta 0x607b59e), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 2564096 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.706610+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 2416640 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.706819+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f724e000/0x0/0x4ffc00000, data 0x2806ffd/0x28fe000, compress 0x0/0x0/0x0, omap 0x34c0c, meta 0x607b3f4), peers [0,2] op hist [0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 2285568 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.707033+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 2269184 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.707187+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452615 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 2121728 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.707305+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.707451+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.707638+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.959165096s of 10.444355011s, submitted: 96
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 2826240 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.707791+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f71d7000/0x0/0x4ffc00000, data 0x287d6fd/0x2975000, compress 0x0/0x0/0x0, omap 0x35035, meta 0x607afcb), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119889920 unmapped: 2793472 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.707982+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463501 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 2777088 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.708251+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.708397+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7188000/0x0/0x4ffc00000, data 0x28cc40f/0x29c4000, compress 0x0/0x0/0x0, omap 0x3568c, meta 0x607a974), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.708535+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 2555904 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.708659+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7183000/0x0/0x4ffc00000, data 0x28d1600/0x29c9000, compress 0x0/0x0/0x0, omap 0x3571a, meta 0x607a8e6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 2662400 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.708849+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464155 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.709768+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.710443+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.710712+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.711124+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447093964s of 11.187532425s, submitted: 79
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.711268+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464993 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.711443+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.711907+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.712361+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.712561+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.712857+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.713199+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.713491+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.713724+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.713968+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.714175+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.714344+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.714516+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.714659+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.714831+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.714979+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.715128+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.715276+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.715535+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.715764+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.715926+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.716262+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.716554+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.716783+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.716995+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.717180+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.717362+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.717514+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.717635+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.717761+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.717897+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.718122+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.718233+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.718433+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.718568+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.718720+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.718885+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.719013+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.719175+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.244323730s of 39.156387329s, submitted: 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.719316+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.719467+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.719612+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466141 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.719775+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.719940+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.720237+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.720461+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x2968543/0x2a61000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.720593+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471213 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x29685de/0x2a62000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.720742+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.720920+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.721058+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.987424850s of 10.797435760s, submitted: 21
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.721181+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [160,161], i have 161, src has [1,161]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.721352+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472851 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 1384448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.721489+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36720, meta 0x60798e0), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.721623+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.721751+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.721951+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.722198+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471541 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36a2d, meta 0x60795d3), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.722336+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.722503+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.723170+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a341/0x2a64000, compress 0x0/0x0/0x0, omap 0x36d3a, meta 0x60792c6), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.723628+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 1359872 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.852377892s of 10.690566063s, submitted: 53
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.724071+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.724495+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.724809+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.725006+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.725139+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.725304+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.725585+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008765000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.725977+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 16
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 1261568 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5590091bb000
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e3000/0x0/0x4ffc00000, data 0x296bfd5/0x2a69000, compress 0x0/0x0/0x0, omap 0x37583, meta 0x6078a7d), peers [0,2] op hist [0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.726323+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.726647+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 17
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.184496880s of 10.089957237s, submitted: 11
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.726951+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482063 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e1000/0x0/0x4ffc00000, data 0x296c1d9/0x2a6b000, compress 0x0/0x0/0x0, omap 0x37802, meta 0x60787fe), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.727163+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.727503+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.727732+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.727912+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.728078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.728340+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.728614+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.728790+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.728955+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.729250+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.729568+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.729773+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.729996+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.730165+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.730286+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.730488+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.730748+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.730898+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.731202+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.731363+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.731529+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.731652+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.731862+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.731991+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.732174+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.732286+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.732430+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.732614+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.732749+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.732852+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.732918+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.733054+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.733249+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.733449+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.733641+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.733817+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.734042+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.734181+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.734282+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.734414+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.734550+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.734717+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.734883+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 04 10:53:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737604014' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.735010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.735152+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.735527+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.735765+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.735891+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.736544+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.736807+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.736993+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.737174+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.737391+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.737549+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.737692+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.738785+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.612331390s of 57.233253479s, submitted: 8
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.740541+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.742006+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.742747+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.743079+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481585 data_alloc: 218103808 data_used: 7996
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 1048576 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.743976+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x559008765000 session 0x5590095501c0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.744290+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x5590091bb000 session 0x559009502700
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.744929+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 18
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.745211+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 1269760 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.745710+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481569 data_alloc: 218103808 data_used: 8151
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.746217+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.746612+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c2c1/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37cb9, meta 0x6078347), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.706017494s of 10.825467110s, submitted: 193
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.746907+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.747473+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.747599+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484091 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 1130496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.747828+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.747990+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.748185+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.748446+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.748677+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482925 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.748846+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.749055+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.749210+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024420738s of 11.033547401s, submitted: 49
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.749372+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.749578+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486275 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.749794+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296f908/0x2a6d000, compress 0x0/0x0/0x0, omap 0x386a3, meta 0x607795d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.749944+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.750151+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.750349+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.750470+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.750594+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.750741+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.750885+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.751010+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.133322716s of 11.148038864s, submitted: 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.751177+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38894, meta 0x607776c), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.751350+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.751496+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.751673+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.751827+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38acc, meta 0x6077534), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.752056+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488795 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.752268+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38ba1, meta 0x607745f), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.752568+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.752779+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.752971+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.753167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488221 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.623150826s of 11.003303528s, submitted: 12
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.753302+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.753465+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.753617+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.753793+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.753945+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488397 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.754156+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.754343+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.754518+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dc000/0x0/0x4ffc00000, data 0x296fbd2/0x2a70000, compress 0x0/0x0/0x0, omap 0x39556, meta 0x6076aaa), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.754708+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.754882+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490089 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.575130463s of 10.002009392s, submitted: 8
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.755036+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.755210+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x399c6, meta 0x607663a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.755410+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.755620+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.755792+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489051 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.755949+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.756212+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39cd3, meta 0x607632d), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.756372+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.756533+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39f99, meta 0x6076067), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.756760+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489211 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819437981s of 10.002734184s, submitted: 15
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.756929+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a143, meta 0x6075ebd), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.757087+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.757259+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.757385+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.757497+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a4de, meta 0x6075b22), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489227 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.757606+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.757811+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.757972+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.758145+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.758358+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490887 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.297217369s of 10.052202225s, submitted: 12
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.758608+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a907, meta 0x60756f9), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.758768+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.758921+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3abcd, meta 0x6075433), peers [0,2] op hist [0,1])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.759130+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.759264+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 2179072 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492913 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.759429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.759621+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f70d9000/0x0/0x4ffc00000, data 0x2971a93/0x2a71000, compress 0x0/0x0/0x0, omap 0x3afa6, meta 0x607505a), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.759755+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [165,166], i have 166, src has [1,166]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.759964+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.760162+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.760394+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.760584+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.760721+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.760918+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.761078+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.761246+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.761461+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.761626+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.761918+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.762471+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.762767+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.764918+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.765669+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.766308+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.767226+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.767875+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.768189+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.768523+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.769499+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.769783+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.770204+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.770495+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.771220+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.772127+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.772597+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.773004+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.773214+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.773348+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.773859+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.774199+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.774418+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.774683+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.774851+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.775064+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.775267+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.775461+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.775650+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.775808+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.776008+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.776191+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.776324+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.776452+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.776624+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.776747+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.776917+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.777070+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.777312+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.777497+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.777680+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.777849+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.777995+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.778167+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.778299+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.778429+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.778582+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.778706+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.778892+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.779060+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.779215+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.779348+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.779502+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.779641+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.779772+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.779913+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.780053+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.780341+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.780611+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.780769+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.780895+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.781317+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:01 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:27.781512+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:28.781687+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}'
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}'
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 19
Dec 04 10:53:01 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:29.781873+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.034767151s of 82.260131836s, submitted: 50
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x559008cf0400 session 0x559008dbb880
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 3686400 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 10:53:01 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:30.782002+0000)
Dec 04 10:53:01 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 3604480 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:01 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 10:53:01 compute-0 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}'
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:01 compute-0 ceph-mon[75358]: pgmap v1313: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1062305256' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/81841245' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 04 10:53:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1737604014' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 04 10:53:02 compute-0 rsyslogd[1007]: imjournal from <np0005545273:ceph-osd>: begin to drop messages due to rate-limiting
Dec 04 10:53:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:02 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:53:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 04 10:53:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372184086' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 04 10:53:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825373307' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 04 10:53:02 compute-0 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.iwufnj(active, since 38m)
Dec 04 10:53:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 04 10:53:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792218402' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 04 10:53:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 04 10:53:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972387382' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: pgmap v1314: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3372184086' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1825373307' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: mgrmap e20: compute-0.iwufnj(active, since 38m)
Dec 04 10:53:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3792218402' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/972387382' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 04 10:53:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22322043' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 04 10:53:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411087488' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 04 10:53:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659670139' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 04 10:53:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496892605' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/22322043' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1411087488' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1659670139' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3496892605' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 04 10:53:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849982030' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 04 10:53:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032207678' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 04 10:53:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196074638' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 04 10:53:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746406332' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3849982030' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mon[75358]: pgmap v1315: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2032207678' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1196074638' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1746406332' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 04 10:53:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465754681' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 04 10:53:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14648 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14650 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/465754681' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 04 10:53:06 compute-0 ceph-mon[75358]: from='client.14648 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:06 compute-0 ceph-mon[75358]: from='client.14650 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14654 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:16.951831+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:17.952030+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:18.952203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929484 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:19.952335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:20.952478+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:50.561942+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.7 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:50.572610+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.7 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.004284859s of 10.015244484s, submitted: 6
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:21.952659+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 4 last_log 167 sent 165 num 4 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:51.576576+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:51.587185+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 165)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:50.561942+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.7 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:50.572610+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.7 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:22.952844+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 167)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:51.576576+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:51.587185+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:23.953010+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934310 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:24.953189+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:25.953318+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:26.953530+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:27.953659+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:28.953846+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:58.616825+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:58.627300+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 169)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:58.616825+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:58.627300+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936723 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:29.954031+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:59.607848+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:20:59.618449+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 171)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:59.607848+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.4 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:20:59.618449+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.4 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:30.954210+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:31.954352+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 933888 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:32.954487+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.027234077s of 12.038124084s, submitted: 6
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:33.954616+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:03.614699+0000 osd.0 (osd.0) 172 : cluster [DBG] 11.10 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:03.624840+0000 osd.0 (osd.0) 173 : cluster [DBG] 11.10 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941551 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 173)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:03.614699+0000 osd.0 (osd.0) 172 : cluster [DBG] 11.10 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:03.624840+0000 osd.0 (osd.0) 173 : cluster [DBG] 11.10 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:34.954768+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:04.599143+0000 osd.0 (osd.0) 174 : cluster [DBG] 10.8 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:04.609692+0000 osd.0 (osd.0) 175 : cluster [DBG] 10.8 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 175)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:04.599143+0000 osd.0 (osd.0) 174 : cluster [DBG] 10.8 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:04.609692+0000 osd.0 (osd.0) 175 : cluster [DBG] 10.8 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:35.954957+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:36.955145+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:37.955284+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:38.955408+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943964 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:39.955525+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 892928 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:40.955650+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:10.543139+0000 osd.0 (osd.0) 176 : cluster [DBG] 10.17 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:10.553674+0000 osd.0 (osd.0) 177 : cluster [DBG] 10.17 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 177)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:10.543139+0000 osd.0 (osd.0) 176 : cluster [DBG] 10.17 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:10.553674+0000 osd.0 (osd.0) 177 : cluster [DBG] 10.17 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:41.955826+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:42.955947+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:12.533961+0000 osd.0 (osd.0) 178 : cluster [DBG] 8.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:12.544512+0000 osd.0 (osd.0) 179 : cluster [DBG] 8.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 179)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:12.533961+0000 osd.0 (osd.0) 178 : cluster [DBG] 8.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:12.544512+0000 osd.0 (osd.0) 179 : cluster [DBG] 8.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:43.956094+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948790 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.731178284s of 10.857257843s, submitted: 8
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:44.956260+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:14.472037+0000 osd.0 (osd.0) 180 : cluster [DBG] 8.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:14.486152+0000 osd.0 (osd.0) 181 : cluster [DBG] 8.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:45.956470+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 4 last_log 183 sent 181 num 4 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:15.494602+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.f scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:15.512271+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.f scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 181)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:14.472037+0000 osd.0 (osd.0) 180 : cluster [DBG] 8.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:14.486152+0000 osd.0 (osd.0) 181 : cluster [DBG] 8.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:46.956865+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 183)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:15.494602+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.f scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:15.512271+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.f scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:47.957065+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:48.957232+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:18.524388+0000 osd.0 (osd.0) 184 : cluster [DBG] 10.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:18.538515+0000 osd.0 (osd.0) 185 : cluster [DBG] 10.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956025 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 185)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:18.524388+0000 osd.0 (osd.0) 184 : cluster [DBG] 10.e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:18.538515+0000 osd.0 (osd.0) 185 : cluster [DBG] 10.e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:49.957443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:50.957586+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:51.957710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:21.604260+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:21.617972+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 794624 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 187)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:21.604260+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:21.617972+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:52.957902+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:53.958168+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958438 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:54.958341+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:55.958471+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 770048 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.146542549s of 12.161815643s, submitted: 8
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:56.958628+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:26.633943+0000 osd.0 (osd.0) 188 : cluster [DBG] 10.15 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:26.648093+0000 osd.0 (osd.0) 189 : cluster [DBG] 10.15 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 189)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:26.633943+0000 osd.0 (osd.0) 188 : cluster [DBG] 10.15 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:26.648093+0000 osd.0 (osd.0) 189 : cluster [DBG] 10.15 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:57.958809+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:58.958970+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960853 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:20:59.959113+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:00.959257+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:30.502357+0000 osd.0 (osd.0) 190 : cluster [DBG] 8.6 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:30.516484+0000 osd.0 (osd.0) 191 : cluster [DBG] 8.6 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 191)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:30.502357+0000 osd.0 (osd.0) 190 : cluster [DBG] 8.6 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:30.516484+0000 osd.0 (osd.0) 191 : cluster [DBG] 8.6 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:01.959474+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:02.959610+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:03.959735+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:33.569442+0000 osd.0 (osd.0) 192 : cluster [DBG] 10.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:33.583734+0000 osd.0 (osd.0) 193 : cluster [DBG] 10.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965677 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 193)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:33.569442+0000 osd.0 (osd.0) 192 : cluster [DBG] 10.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:33.583734+0000 osd.0 (osd.0) 193 : cluster [DBG] 10.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:04.959935+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:34.580677+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1c scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:34.622990+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1c scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:05.960139+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 195)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:34.580677+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1c scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:34.622990+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1c scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.120763779s of 10.135630608s, submitted: 8
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:06.960301+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:36.769506+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:36.790683+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:07.960475+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 197)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:36.769506+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:36.790683+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:08.960673+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970503 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:09.960838+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:10.961010+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:40.736934+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:40.765163+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 199)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:40.736934+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:40.765163+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:11.961182+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:12.961317+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:13.961476+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972916 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:14.961605+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:15.961769+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:16.961967+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:17.962556+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:18.962703+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972916 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.841277122s of 12.850917816s, submitted: 4
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:19.962894+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:49.620687+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:49.659511+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 201)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:49.620687+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:49.659511+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:20.963135+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:21.963267+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:51.625532+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:51.668042+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 203)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:51.625532+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:51.668042+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:22.963424+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:52.620110+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:52.659043+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 205)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:52.620110+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.d scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:52.659043+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.d scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:23.963676+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980149 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:24.963852+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:25.963974+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:26.964163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:56.740882+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:56.769222+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 207)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:56.740882+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.9 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:56.769222+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.9 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:27.964396+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:28.964519+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:58.793968+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.16 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:21:58.818704+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.16 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 209)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:58.793968+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.16 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:21:58.818704+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.16 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984973 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:29.964710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.012728691s of 11.165803909s, submitted: 10
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:30.964879+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:00.786425+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:00.811035+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 211)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:00.786425+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.b scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:00.811035+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.b scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:31.965069+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:32.965245+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:33.965420+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987384 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:34.965545+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:35.965680+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:05.806209+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.5 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:05.848594+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.5 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 213)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:05.806209+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.5 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:05.848594+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.5 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:36.965914+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:37.966143+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:38.966260+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 450560 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989795 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:39.966399+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:40.966554+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:41.966714+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:42.966860+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:43.966994+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989795 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:44.967149+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:45.967315+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:46.967502+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.966968536s of 16.978771210s, submitted: 4
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:47.967677+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:17.765256+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.11 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:17.800487+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.11 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 215)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:17.765256+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.11 scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:17.800487+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.11 scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:48.967926+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992208 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:49.968066+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:19.760711+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.1e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  will send 2025-12-04T10:22:19.792506+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.1e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client handle_log_ack log(last 217)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:19.760711+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.1e scrub starts
Dec 04 10:53:06 compute-0 ceph-osd[86021]: log_client  logged 2025-12-04T10:22:19.792506+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.1e scrub ok
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:50.968329+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:51.968473+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:52.968643+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:53.968800+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:54.968948+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:55.969138+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:56.969308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:57.969448+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:58.969544+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:21:59.969689+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:00.969813+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:01.969966+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:02.970122+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:03.970355+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:04.970463+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:05.970694+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:06.970905+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:07.971030+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:08.971191+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:09.971324+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:10.971454+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:11.971619+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:12.971783+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 303104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:13.971902+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 303104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:14.972238+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 303104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:15.972390+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:16.972565+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:17.972715+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:18.972884+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:19.973038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:20.973200+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:21.973342+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:22.973491+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:23.973616+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:24.973763+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:25.973890+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:26.974091+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:27.974257+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:28.974402+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:29.974569+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:30.974704+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:31.974876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:32.975063+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:33.975168+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:34.975284+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:35.975424+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:36.975621+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:37.975777+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:38.975906+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:39.976067+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:40.976207+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:41.976341+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:42.976469+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:43.976631+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:44.976783+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:45.976971+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:46.977193+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:47.977376+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:48.977509+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:49.977638+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:50.977789+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:51.977956+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:52.978126+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:53.978271+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:54.978425+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:55.978555+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:56.978731+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:57.978926+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:58.979077+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:22:59.979312+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:00.979447+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:01.979588+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:02.979742+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:03.979880+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 81920 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:04.980045+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 81920 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:05.980172+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:06.980333+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:07.980466+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:08.980597+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:09.980744+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:10.980891+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:11.981029+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:12.981186+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:13.981308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:14.981429+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:15.981587+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:16.981766+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 16384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:17.981917+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 8192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:18.982124+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:19.982280+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:20.982436+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:21.982546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:22.982727+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:23.982887+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:24.983140+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:25.983325+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:26.983527+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:27.983691+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:28.983935+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:29.984170+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:30.984418+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1024000 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:31.984629+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:32.984766+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:33.984907+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:34.984993+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:35.985193+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:36.985369+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:37.985504+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:38.985839+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:39.986051+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:40.986398+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:41.986663+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:42.986868+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:43.987192+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:44.987390+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:45.987585+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:46.987790+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:47.987962+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:48.988161+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:49.988314+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:50.988524+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:51.988683+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 933888 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:52.988876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:53.989076+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:54.989242+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:55.989412+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:56.989596+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:57.989723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:58.989860+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:23:59.990004+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:00.990162+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:01.990401+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:02.990567+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:03.990709+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 876544 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:04.990867+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 868352 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:05.991066+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 868352 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:06.991305+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 868352 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:07.991441+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:08.991608+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:09.991689+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:10.991928+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:11.992075+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 851968 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:12.992284+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:13.992405+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:14.992537+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:15.992673+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:16.992916+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:17.993082+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:18.994047+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:19.994195+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:20.994561+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:21.994742+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:22.994882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:23.995018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:24.995155+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:25.995298+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:26.995586+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:27.995711+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:28.995843+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:29.996007+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:30.996158+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:31.996322+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:32.996457+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:33.997061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:34.997383+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:35.997502+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:36.997716+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:37.997895+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:38.998065+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:39.998171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:40.998345+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:41.998478+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:42.999489+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:43.999722+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:45.000013+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:46.000176+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:47.000398+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:48.000523+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:49.000678+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:50.000832+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:51.000971+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:52.001117+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 18.49 MB, 0.03 MB/s
                                           Interval WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:53.001256+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:54.001374+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:55.001504+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:56.001667+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:57.001909+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:58.002078+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:24:59.002286+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:00.002418+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:01.002578+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:02.002748+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:03.002942+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:04.003136+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:05.003282+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:06.003423+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:07.003589+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:08.003733+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:09.003863+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:10.003965+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:11.004072+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:12.004161+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:13.004283+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:14.004406+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:15.004543+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:16.004663+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:17.004880+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:18.005015+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:19.005145+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:20.005270+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:21.005403+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:22.005522+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:23.005633+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:24.005782+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:25.005916+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:26.006137+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:27.006339+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:28.006506+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 458752 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:29.006662+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 458752 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:30.006821+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 458752 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:31.006955+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 450560 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:32.007130+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 450560 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:33.007299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:34.007395+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:35.007527+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:36.007704+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 425984 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:37.007937+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 425984 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:38.008142+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 425984 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:39.008359+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 417792 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:40.008520+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 417792 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:41.008664+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 409600 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:42.008791+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 409600 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:43.008932+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 401408 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:44.009078+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 401408 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:45.009246+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 401408 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:46.009387+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 393216 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:47.009567+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 393216 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:48.009720+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 393216 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:49.009863+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 385024 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:50.009993+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 385024 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:51.010133+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 376832 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:52.010275+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 376832 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:53.010546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 376832 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:54.010668+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 368640 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:55.010786+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 368640 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:56.010925+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 368640 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:57.011170+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 360448 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:58.011349+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 360448 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:25:59.011474+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 352256 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:00.011599+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 352256 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:01.011723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 352256 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:02.011884+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 344064 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:03.012024+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 344064 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:04.012187+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 344064 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:05.012350+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 335872 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:06.012516+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 327680 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:07.012641+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 319488 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:08.012890+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 319488 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:09.013014+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 311296 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:10.013148+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 311296 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:11.013336+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 303104 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:12.013521+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 294912 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:13.013650+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 294912 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:14.013772+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 294912 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:15.013959+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 286720 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:16.014126+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 286720 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:17.014301+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 286720 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:18.014451+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 278528 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:19.014593+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 278528 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:20.014769+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 270336 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:21.014889+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 270336 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:22.015037+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 262144 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:23.015172+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 275.391906738s of 275.403167725s, submitted: 4
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 180224 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [1])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:24.015279+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:25.015420+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:26.015562+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:27.015734+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:28.015868+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1835008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:29.016014+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:30.016153+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:31.016285+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:32.016424+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:33.016546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:34.016683+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:35.016830+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:36.016949+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:37.017162+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1794048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:38.017322+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1794048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:39.017462+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1785856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:40.017573+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1785856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:41.017676+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 1777664 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:42.017828+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 1769472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:43.017963+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 1769472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:44.018156+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1761280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:45.018294+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1761280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:46.018425+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1744896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:47.018669+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1744896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:48.018834+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1744896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:49.018976+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 1736704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:50.019180+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 1736704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:51.019311+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1728512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:52.019442+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1728512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:53.019578+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:54.019701+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:55.019813+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:56.019896+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:57.020067+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:58.020289+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:26:59.020510+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:00.021177+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:01.021313+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:02.021479+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:03.021634+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:04.021773+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:05.021905+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:06.022056+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:07.022250+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:08.022403+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:09.022539+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:10.022717+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:11.022941+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:12.023120+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:13.023252+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:14.023412+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:15.023789+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:16.023953+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:17.024150+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:18.024295+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:19.024562+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:20.024687+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:21.024877+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:22.025146+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:23.025277+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:24.025439+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:25.025630+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1679360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:26.025843+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1671168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:27.026049+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1671168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:28.026194+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1671168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:29.026360+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1671168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:30.026529+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1671168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:31.026609+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:32.026746+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:33.026876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:34.027018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:35.027152+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:36.027276+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:37.027454+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:38.027600+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:39.027798+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:40.028002+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1646592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:41.028165+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:42.028306+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:43.028444+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:44.028620+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:45.028770+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:46.028943+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:47.029127+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:48.029249+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:49.029397+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:50.029521+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:51.029683+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:52.029852+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:53.030034+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:54.031205+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:55.031384+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:56.031517+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:57.031812+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:58.031950+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1622016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:27:59.032113+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1622016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:00.032275+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1622016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:01.032462+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1622016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:02.032585+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1622016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:03.032699+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1613824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:04.033083+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1613824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:05.033229+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1613824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:06.033358+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:07.033540+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:08.033671+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:09.033808+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:10.033970+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:11.034136+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:12.034525+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:13.034758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:14.034987+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:15.035150+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:16.035296+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:17.035466+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:18.035636+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:19.035745+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:20.035893+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:21.036018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:22.036169+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:23.036333+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:24.036482+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:25.036674+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:26.036796+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:27.036961+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:28.037143+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:29.037275+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:30.037450+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:31.037564+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:32.037695+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:33.037900+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:34.038010+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:35.038171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:36.038320+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:37.038484+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:38.038637+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:39.038765+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:40.038943+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:41.039089+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:42.039258+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:43.039397+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:44.039543+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:45.039723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:46.039902+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:47.040108+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:48.040337+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:49.040488+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.040620+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.040782+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.040972+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.041145+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.041299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.041539+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.041827+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.042026+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.042183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.042299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.042431+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.042758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.042998+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.043142+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.043282+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.043388+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.043628+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.043882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.044010+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.044177+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.044307+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.044485+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.044635+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.044816+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.044993+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.045167+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.045291+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.045454+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.045595+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.045741+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.045882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.046044+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.046171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.046319+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.046443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.046562+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.046682+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.047155+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.047273+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.047415+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.047550+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.047700+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.047882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.048041+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.048171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.048337+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.048480+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.049061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.049240+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.049368+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.049538+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.049812+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.049921+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.050146+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.050285+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.050389+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.050500+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.050670+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.050809+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.050967+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.051153+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.051359+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.051518+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.051647+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.051794+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.051937+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.052061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1409024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.052310+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1409024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.052426+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.052553+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.052816+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.052997+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.053241+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.053445+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.053590+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.053721+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.053872+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.054038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc ms_handle_reset ms_handle_reset con 0x561162cce000
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: get_auth_request con 0x561165051800 auth_method 0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_configure stats_period=5
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.054203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.054413+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.054567+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.054690+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.054821+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.054972+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 ms_handle_reset con 0x561162552400 session 0x5611613c9340
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165051c00
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.055120+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.055303+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.055455+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.055651+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.055756+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.055875+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.056056+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.056202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.056363+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.056503+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.056646+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.056784+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.056926+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.057156+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.057311+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.057470+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.057595+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.057757+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.057984+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.058186+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.058355+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.058538+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.058723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.058966+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.059188+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.059366+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.059499+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.059639+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.059781+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.060252+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.060405+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.060533+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.060644+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.060799+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.060922+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.061094+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.061335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.061483+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.061608+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.061754+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.061865+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.061995+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.062139+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.062292+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.062435+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.062614+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.062760+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.062880+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.063051+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.063183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.063317+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.063447+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.063607+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.063873+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.064028+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.064182+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.064324+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.064452+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.064572+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.064718+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.064872+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.065065+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.065231+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.065445+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.065634+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.065816+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.065975+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.066136+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.066270+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.066573+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.918304443s of 300.103698730s, submitted: 106
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561163aeb000
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.066746+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.066917+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.067177+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.067396+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:28.067545+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.067672+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.067843+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.067972+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.068185+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.068347+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.068496+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.068640+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.068926+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.069163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.069318+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.069496+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.069686+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.069817+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.069971+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.070159+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.070367+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.070592+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.070789+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.070964+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.071111+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.071249+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.071396+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.071537+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.071728+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.071892+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.072243+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.072457+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.072650+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.072868+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.073037+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.073147+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.073275+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.073402+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.073535+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.073731+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.073873+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.074071+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.074248+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.074487+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.074655+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.074783+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.074932+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.075081+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.075308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.075547+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.075733+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.075890+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.076033+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.076220+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.076395+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.076542+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.076728+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.076959+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.077153+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.077455+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.077660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.077803+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.077961+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.078160+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.078316+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.078446+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.078650+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.078824+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.078985+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.079192+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.079442+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.079582+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.079720+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.080053+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.080140+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.080263+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.080426+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.080618+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.080782+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.081068+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.081335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.081517+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.081694+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.081932+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.082105+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.082327+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.082504+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.082735+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.082915+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.083077+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.083289+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.083498+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.083697+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.084018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.084250+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.084441+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.084664+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.084929+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.085173+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.085362+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.085614+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.085904+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.086175+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.086411+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.086650+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.086876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.087188+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.087499+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.087847+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.088126+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.088520+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.088726+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.088938+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.089228+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.089527+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.089767+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.089951+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.090231+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.090451+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.090660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.090854+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.091030+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.091258+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.091483+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.091660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.091877+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.092164+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.092371+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.092538+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.092710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.092931+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.093184+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.093368+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.093658+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.093912+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.094148+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.094370+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.094564+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.094744+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.094896+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.095120+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.095317+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.095526+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.095712+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.095889+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.096049+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.096268+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.096481+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.096673+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.096992+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.097267+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.097512+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.097740+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.097955+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.098302+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.098553+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.098916+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.099237+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.099440+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.099629+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.099859+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.100061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.100380+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.100623+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.100843+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.101078+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.101299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.101484+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.101726+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.102182+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.102765+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.102966+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.103219+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.103552+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.103723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.103900+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.104071+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000017s
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.104304+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.104485+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.104663+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.104857+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.105041+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.105214+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.105540+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.105728+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.105928+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.106136+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.106319+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.106514+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.106688+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.107038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.107268+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.107452+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.107755+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.107940+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.108216+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.108580+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.108756+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.108936+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.109128+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.109351+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.109585+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.109820+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.110252+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.110818+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.111309+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.111808+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.112012+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.112346+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5692 writes, 24K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5692 writes, 915 syncs, 6.22 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.112792+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.112973+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.113152+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.113400+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.113558+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.113708+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.113855+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.113988+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.114247+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.114420+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.114639+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.114851+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.115062+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.115271+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.115502+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.115718+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.115975+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.116174+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.423808+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.423985+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.424174+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.424366+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.424522+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.424663+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.424874+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.425019+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.425138+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.425276+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.425439+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.425582+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.425711+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.425841+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.425983+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.426136+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.426336+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.426510+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.426703+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.426866+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.427035+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.427165+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.427274+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.427401+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.427523+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.427662+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.427811+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.427986+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.428302+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.428495+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.428663+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.428867+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.429009+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.429171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.429296+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.429791+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.429928+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.430366+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.430529+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.430678+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.430823+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.431054+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.431248+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.431669+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.431818+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.432052+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.432348+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.432494+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.432607+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.432766+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.432950+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.433210+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.433397+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.433551+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.433758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.434005+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.434430+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.434601+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.434785+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.435020+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.435290+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.435508+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.435661+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.435819+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.436063+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.436250+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.436424+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.436588+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.436782+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.436909+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.437018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.437135+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.937805176s of 299.970245361s, submitted: 18
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.437243+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.437370+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.437495+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.437619+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.437779+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.437911+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.437975+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.438138+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.438321+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.438558+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.438680+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.438844+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.438981+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.439119+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.439262+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.439395+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.439560+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.439717+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.439939+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.440094+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.440296+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.440425+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.440578+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.440714+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.440881+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.440996+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.441155+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.441321+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.441701+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.441854+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.442018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.442195+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.442379+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.442603+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.442827+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.443028+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.443172+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.443337+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.443494+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.443757+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.443953+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.444162+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.444524+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.444721+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.444910+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.445061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.445222+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.445349+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.445497+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.445685+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.445873+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.446019+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.446180+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.446309+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.446487+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.446698+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.446850+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.447065+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.447266+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.447444+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.447616+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.478660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.478842+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.479030+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.479253+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.479379+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.479502+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.479649+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.479802+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.479922+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.480069+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.480222+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.480329+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.480468+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.480685+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.480790+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.480931+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.481023+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.481186+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.481334+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.481489+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.481646+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.481794+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.481941+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.482202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.482348+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.482492+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.482659+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.482826+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.482965+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.483238+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.483377+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.483492+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.483660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.483829+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.483997+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.484178+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.484312+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.484591+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.484755+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.485006+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.486062+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.486623+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.487492+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.488016+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.488267+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.488394+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.489009+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.489283+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.489598+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.489814+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.490205+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.490617+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.490990+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.491345+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.491647+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.491933+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.492254+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 117.103752136s of 117.262046814s, submitted: 106
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780400
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 98304 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.492583+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 120 ms_handle_reset con 0x561165780400 session 0x5611655cd880
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.492821+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 16867328 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048276 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780c00
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.492962+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 16711680 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 121 ms_handle_reset con 0x561165780c00 session 0x5611653556c0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.493146+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.493419+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc244000/0x0/0x4ffc00000, data 0xd1ffb0/0xde4000, compress 0x0/0x0/0x0, omap 0x11804, meta 0x2bbe7fc), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.493661+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.493920+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076723 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.494123+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.494341+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.494483+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.494697+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.494912+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.495163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.495315+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.495456+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.495595+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.495835+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.496018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.496177+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.496370+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.496607+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.496812+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.497089+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 10
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.497344+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.497545+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.497743+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.498369+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.498565+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.498773+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.499075+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.499308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.499510+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.499739+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 11
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.499953+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.500151+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.303222656s of 35.438919067s, submitted: 47
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.500428+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.500732+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081519 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.500907+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.501048+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.501246+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.501391+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.501678+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081519 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.501935+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.502163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.502377+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.502576+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.502775+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.513285+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.513512+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.513710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.513846+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.514066+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.514253+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.514442+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.514671+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.514818+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.515054+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.515238+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.515429+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.515555+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.515698+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165781000
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.509790421s of 25.738904953s, submitted: 31
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.515835+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085985 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.515971+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23c000/0x0/0x4ffc00000, data 0xd2514e/0xdee000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.516181+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.516335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.516453+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.516628+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd26d53/0xdf1000, compress 0x0/0x0/0x0, omap 0x12311, meta 0x2bbdcef), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088615 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.516876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.517261+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd26d53/0xdf1000, compress 0x0/0x0/0x0, omap 0x12311, meta 0x2bbdcef), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.517436+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.517708+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.517942+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088615 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.518131+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.492445946s of 12.550980568s, submitted: 38
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.518331+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.518473+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.518617+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.518850+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.519176+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.519394+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.519657+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.519791+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.520192+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.520372+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.520899+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.521061+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.521202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.521411+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.521541+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.521774+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.521998+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.522178+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.262409210s of 18.273027420s, submitted: 11
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.522306+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 16728064 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092361 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 12
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.522481+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd2886d/0xdf5000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x12836, meta 0x2bbd7ca), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.522704+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.523032+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.523181+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.523383+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095743 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc233000/0x0/0x4ffc00000, data 0xd2a3d7/0xdf7000, compress 0x0/0x0/0x0, omap 0x12af6, meta 0x2bbd50a), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.523564+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.523758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc233000/0x0/0x4ffc00000, data 0xd2a50d/0xdf9000, compress 0x0/0x0/0x0, omap 0x12af6, meta 0x2bbd50a), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.523925+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc22e000/0x0/0x4ffc00000, data 0xd2c152/0xdfc000, compress 0x0/0x0/0x0, omap 0x12db8, meta 0x2bbd248), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.524064+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.524304+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 14499840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.524476+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104979 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc229000/0x0/0x4ffc00000, data 0xd2dda7/0xdff000, compress 0x0/0x0/0x0, omap 0x13240, meta 0x2bbcdc0), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.146455765s of 11.366581917s, submitted: 72
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 14491648 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.524683+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd2dd0c/0xdfe000, compress 0x0/0x0/0x0, omap 0x13240, meta 0x2bbcdc0), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 13410304 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.524835+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 13344768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 130 handle_osd_map epochs [131,132], i have 130, src has [1,132]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.525018+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 13246464 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.525224+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.525677+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117249 data_alloc: 218103808 data_used: 4361
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc220000/0x0/0x4ffc00000, data 0xd34ce6/0xe0a000, compress 0x0/0x0/0x0, omap 0x13a95, meta 0x2bbc56b), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.525881+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 133 handle_osd_map epochs [134,135], i have 133, src has [1,135]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.526038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd38436/0xe10000, compress 0x0/0x0/0x0, omap 0x13d70, meta 0x2bbc290), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.526183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.526423+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.526620+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122177 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc21c000/0x0/0x4ffc00000, data 0xd38300/0xe0e000, compress 0x0/0x0/0x0, omap 0x13d70, meta 0x2bbc290), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.265845299s of 10.467185020s, submitted: 143
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.526840+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.527186+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.527428+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.527657+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.527846+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124519 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.528051+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.528243+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.528443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.528617+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.528776+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124519 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.179412842s of 10.185050011s, submitted: 10
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.528901+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.529033+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.529229+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.529461+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.529835+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.530038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.530202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.530408+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.530564+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.530745+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.530928+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.531051+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.531205+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.531375+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.531518+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.531713+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.531864+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.532019+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.532226+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.345500946s of 18.347776413s, submitted: 1
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.532384+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127183 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.532510+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.532645+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39f35/0xe13000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.532775+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.532925+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.533057+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127039 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.533213+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.533363+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.533522+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39f35/0xe13000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.533685+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.533850+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127039 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034024239s of 11.041786194s, submitted: 3
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.534006+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.534164+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.534361+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.534538+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 12779520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.534686+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125347 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 12779520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 13
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.534876+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.534992+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.535168+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.535319+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.535465+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125347 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.244282722s of 10.261468887s, submitted: 135
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 12713984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.535643+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 12713984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.535786+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 12451840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.535927+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 12451840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.536149+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc20a000/0x0/0x4ffc00000, data 0xd48ce3/0xe22000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 12443648 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.536299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131365 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 12001280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.536527+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 11919360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.536673+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 10600448 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.536879+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 10264576 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.537032+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1d2000/0x0/0x4ffc00000, data 0xd7ff44/0xe5a000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 10264576 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.537220+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137373 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 10149888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.537373+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 10149888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.537517+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.977731705s of 11.395256042s, submitted: 22
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.537660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.537826+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.537976+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135945 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1cf000/0x0/0x4ffc00000, data 0xd82fff/0xe5d000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 10305536 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.538172+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 10305536 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.538364+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 10256384 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.538523+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 9994240 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.538715+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc19b000/0x0/0x4ffc00000, data 0xdb8c69/0xe91000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 9805824 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.538879+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137531 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 9609216 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdd5d75/0xeae000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.539041+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 9609216 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.539211+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.382287979s of 10.000560760s, submitted: 29
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 9969664 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.539357+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc15b000/0x0/0x4ffc00000, data 0xdf8a8e/0xed1000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 8568832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.539508+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 7479296 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.539664+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140175 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 7315456 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.539830+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xe1eb45/0xef7000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87056384 unmapped: 6266880 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.539981+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf7c000/0x0/0x4ffc00000, data 0xe3717a/0xf10000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 5857280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.540164+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87678976 unmapped: 5644288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.540330+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87400448 unmapped: 5922816 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.540465+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140755 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87523328 unmapped: 5799936 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.540594+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf55000/0x0/0x4ffc00000, data 0xe5e360/0xf37000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.540762+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.452430725s of 10.001968384s, submitted: 39
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf3b000/0x0/0x4ffc00000, data 0xe78260/0xf51000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.540884+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.541147+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 5382144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.541340+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144413 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf1e000/0x0/0x4ffc00000, data 0xe96c80/0xf6e000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 5324800 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.541512+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 5316608 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.541765+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.541913+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.542074+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.542240+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149975 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 5390336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.542419+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 5390336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.542604+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [0,0,0,0,0,0,0,3])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.731677055s of 10.078829765s, submitted: 42
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 5701632 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.542778+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faee2000/0x0/0x4ffc00000, data 0xecdfcc/0xfa8000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.543040+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.543258+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151029 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.543418+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faed5000/0x0/0x4ffc00000, data 0xedc6c9/0xfb7000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 4497408 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.543577+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.543752+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faeba000/0x0/0x4ffc00000, data 0xef76f2/0xfd2000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.543899+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 4268032 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.544052+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153741 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 4268032 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.544190+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 4628480 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.544300+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.544446+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.544642+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae92000/0x0/0x4ffc00000, data 0xf1f503/0xffa000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566334724s of 12.781497002s, submitted: 36
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.544781+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158537 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 4333568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae67000/0x0/0x4ffc00000, data 0xf4a751/0x1025000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.544907+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 4333568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.545042+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 4325376 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.545210+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae5d000/0x0/0x4ffc00000, data 0xf545d3/0x102f000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.545354+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.545563+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156873 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.545730+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.546015+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae32000/0x0/0x4ffc00000, data 0xf7d583/0x1059000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.546174+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.546342+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 4284416 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.546495+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161725 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.631993294s of 10.691933632s, submitted: 33
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90374144 unmapped: 2949120 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.546721+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae02000/0x0/0x4ffc00000, data 0xfaede7/0x108a000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 2924544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.546903+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 2621440 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.547043+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 2621440 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.547209+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90710016 unmapped: 2613248 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.547389+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163151 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90849280 unmapped: 2473984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.547536+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90849280 unmapped: 2473984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.547657+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fadd5000/0x0/0x4ffc00000, data 0xfdd2ba/0x10b7000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90341376 unmapped: 2981888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.547865+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.548086+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.548307+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167259 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.548453+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.624699593s of 10.735450745s, submitted: 38
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fadbe000/0x0/0x4ffc00000, data 0xff222b/0x10cd000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90628096 unmapped: 2695168 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.548631+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90718208 unmapped: 2605056 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.548799+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90718208 unmapped: 2605056 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.548972+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.549178+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168531 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.549340+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fad6c000/0x0/0x4ffc00000, data 0x10463f2/0x1120000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.549532+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 2498560 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.549724+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 2482176 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.549901+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 2482176 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.550038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169187 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fad25000/0x0/0x4ffc00000, data 0x108cd39/0x1167000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2187264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.550175+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.091275215s of 10.204633713s, submitted: 47
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2187264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.550318+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92356608 unmapped: 2015232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.550494+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.550656+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.550795+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174183 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4face3000/0x0/0x4ffc00000, data 0x10cd692/0x11a9000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.550933+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4facc6000/0x0/0x4ffc00000, data 0x10ea658/0x11c6000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 2449408 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.551057+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4facbb000/0x0/0x4ffc00000, data 0x10f617e/0x11d1000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.551202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92151808 unmapped: 2220032 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.551468+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92151808 unmapped: 2220032 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.551647+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181625 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.551773+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.551914+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.397225380s of 10.507322311s, submitted: 62
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.552064+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93577216 unmapped: 794624 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac75000/0x0/0x4ffc00000, data 0x113c22f/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.552237+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 778240 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.552355+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 778240 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180405 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.552512+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.552688+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.552880+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.553043+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.553200+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179687 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.553356+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.553505+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.553673+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.553844+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.554017+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179687 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.554135+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.554262+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.933678627s of 15.948211670s, submitted: 9
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.554394+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.554555+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d51e/0x1218000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.554700+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178837 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.554833+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.554979+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.555089+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.555271+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.555408+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fac70000/0x0/0x4ffc00000, data 0x113f088/0x121a000, compress 0x0/0x0/0x0, omap 0x14aac, meta 0x3d5b554), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181357 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.555591+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.555709+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.555879+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.532999039s of 10.572518349s, submitted: 22
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.556036+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.556183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180637 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fac72000/0x0/0x4ffc00000, data 0x113f088/0x121a000, compress 0x0/0x0/0x0, omap 0x14aac, meta 0x3d5b554), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.556375+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.556559+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.556733+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.556886+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.557019+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac69000/0x0/0x4ffc00000, data 0x1140cc8/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189079 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac69000/0x0/0x4ffc00000, data 0x1140cc8/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.557346+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.557512+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.557648+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140cf6/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.557814+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.557926+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189173 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.558041+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.848041534s of 12.879414558s, submitted: 22
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.558174+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6c000/0x0/0x4ffc00000, data 0x1140cf4/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.558365+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.558805+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.558965+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190131 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140ca2/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.559210+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.559599+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.559727+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.559939+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.560212+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189173 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.560386+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140ca2/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.560664+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.560874+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140c5d/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.561194+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.930911064s of 12.951424599s, submitted: 9
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.561376+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140c5d/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189317 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.561508+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.561680+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.561807+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93700096 unmapped: 1720320 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.562030+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 1712128 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fac66000/0x0/0x4ffc00000, data 0x1142807/0x1223000, compress 0x0/0x0/0x0, omap 0x15051, meta 0x3d5afaf), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.562230+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 1712128 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194437 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.562385+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1703936 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.562620+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 1835008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fac68000/0x0/0x4ffc00000, data 0x114287b/0x1224000, compress 0x0/0x0/0x0, omap 0x15051, meta 0x3d5afaf), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.562862+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 1802240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.563150+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 1802240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.979999542s of 10.044813156s, submitted: 35
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.563301+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93675520 unmapped: 1744896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196607 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.563443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.563584+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.563721+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.563862+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.564014+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199381 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.564184+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.564344+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.564516+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.564667+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.564828+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.310994148s of 11.361922264s, submitted: 33
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202155 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.565052+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93765632 unmapped: 1654784 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.565223+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.565373+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.565554+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.565722+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fac59000/0x0/0x4ffc00000, data 0x114952b/0x122f000, compress 0x0/0x0/0x0, omap 0x15b6a, meta 0x3d5a496), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207085 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.565815+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.565955+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.566077+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.566308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fac55000/0x0/0x4ffc00000, data 0x114b258/0x1234000, compress 0x0/0x0/0x0, omap 0x15e44, meta 0x3d5a1bc), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.566467+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212061 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.566607+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.653561592s of 10.730669022s, submitted: 58
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.566986+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fac52000/0x0/0x4ffc00000, data 0x114cc92/0x1236000, compress 0x0/0x0/0x0, omap 0x16112, meta 0x3d59eee), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.567154+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.567308+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fac52000/0x0/0x4ffc00000, data 0x114cbcb/0x1235000, compress 0x0/0x0/0x0, omap 0x16112, meta 0x3d59eee), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.567434+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211833 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.567581+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.567724+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 147 handle_osd_map epochs [148,149], i have 147, src has [1,149]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.567871+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 1662976 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.568001+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac4f000/0x0/0x4ffc00000, data 0x115021f/0x123a000, compress 0x0/0x0/0x0, omap 0x163dc, meta 0x3d59c24), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.568142+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218533 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.568291+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.568434+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.520431519s of 10.585706711s, submitted: 58
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.568713+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.568906+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac51000/0x0/0x4ffc00000, data 0x1150252/0x123a000, compress 0x0/0x0/0x0, omap 0x163dc, meta 0x3d59c24), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.569026+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224113 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.569163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.569299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.569421+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac47000/0x0/0x4ffc00000, data 0x11538ff/0x1240000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac47000/0x0/0x4ffc00000, data 0x11538ff/0x1240000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.569631+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.569780+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222929 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.569982+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.570160+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.571211+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.172570229s of 11.231684685s, submitted: 44
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.572197+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac3d000/0x0/0x4ffc00000, data 0x1161566/0x124e000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.573033+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.573758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228219 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 1957888 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.574314+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabf4000/0x0/0x4ffc00000, data 0x11a64cc/0x1295000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1597440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.574936+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1597440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.575555+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 1417216 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.576081+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabf4000/0x0/0x4ffc00000, data 0x11a64cc/0x1295000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 1417216 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.576590+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237541 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.576840+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabc4000/0x0/0x4ffc00000, data 0x11d9022/0x12c7000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.577240+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.599710464s of 10.676040649s, submitted: 52
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.577403+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 1654784 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.577741+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.578089+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235651 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fab80000/0x0/0x4ffc00000, data 0x121d659/0x130c000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.578246+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fab80000/0x0/0x4ffc00000, data 0x121d659/0x130c000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.578515+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 1949696 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.578804+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 1785856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.579088+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 1785856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.579317+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245715 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96272384 unmapped: 1245184 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faaef000/0x0/0x4ffc00000, data 0x12ab5d1/0x139b000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.579468+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 950272 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faac5000/0x0/0x4ffc00000, data 0x12d5e1b/0x13c5000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.579651+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 819200 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.579898+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.532475471s of 10.651388168s, submitted: 68
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.580026+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.580260+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248267 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.580417+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97009664 unmapped: 507904 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faa71000/0x0/0x4ffc00000, data 0x132b958/0x141b000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.580693+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 1564672 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.580879+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8145 writes, 31K keys, 8145 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8145 writes, 1973 syncs, 4.13 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2453 writes, 7270 keys, 2453 commit groups, 1.0 writes per commit group, ingest: 9.86 MB, 0.02 MB/s
                                           Interval WAL: 2453 writes, 1058 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 1564672 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faa31000/0x0/0x4ffc00000, data 0x136b62c/0x145a000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.581053+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 1310720 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.581222+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253769 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1859584 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.581352+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 1646592 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.581535+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98230272 unmapped: 1384448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.581776+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98230272 unmapped: 1384448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa9d2000/0x0/0x4ffc00000, data 0x13ca761/0x14b9000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.581943+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.463150978s of 10.566673279s, submitted: 59
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 1097728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa9d2000/0x0/0x4ffc00000, data 0x13ca761/0x14b9000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.582080+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259253 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 917504 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa981000/0x0/0x4ffc00000, data 0x141b86f/0x150a000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.582210+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 2695168 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.582323+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa96c000/0x0/0x4ffc00000, data 0x14320c0/0x1520000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98181120 unmapped: 2482176 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.582438+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa96c000/0x0/0x4ffc00000, data 0x14320c0/0x1520000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.582606+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.582744+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262117 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.582874+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 1114112 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.583034+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 1114112 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc ms_handle_reset ms_handle_reset con 0x561165051800
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: get_auth_request con 0x561165735000 auth_method 0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_configure stats_period=5
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.583258+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 1269760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14cfe54/0x15be000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.583640+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.120785713s of 10.410181046s, submitted: 59
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1933312 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.583809+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266233 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1933312 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.583970+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0x15196aa/0x1608000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 1974272 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.584184+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99901440 unmapped: 1810432 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.584322+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 ms_handle_reset con 0x561165051c00 session 0x561163af4a80
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780400
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.584444+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 1687552 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.584588+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269981 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa84c000/0x0/0x4ffc00000, data 0x15508d1/0x163f000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 1687552 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.584712+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.584846+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.584999+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.585197+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.585433+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266237 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.895224571s of 11.019706726s, submitted: 40
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.585583+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.585720+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x15553d0/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.585834+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.585985+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.586125+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265949 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.586265+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.586400+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.586610+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.586752+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x1555496/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.586912+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267657 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.587053+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.693873405s of 10.723943710s, submitted: 15
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.587394+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.587609+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.587768+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165781c00
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555465/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.587915+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274281 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.588146+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 14
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.588418+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.588659+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.588807+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fb/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.588971+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270147 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.589133+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.204983711s of 10.259576797s, submitted: 25
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.589279+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.589435+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.589579+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x1555468/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.589759+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269413 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.589884+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.590009+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.590204+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.590360+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x155552f/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.590506+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271121 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x155552f/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.590661+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.689720154s of 10.207288742s, submitted: 16
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.590810+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fb/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.590951+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.591135+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.591255+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272669 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.591423+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa844000/0x0/0x4ffc00000, data 0x15555f7/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.591556+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100089856 unmapped: 1622016 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa844000/0x0/0x4ffc00000, data 0x15555f7/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.591715+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.591844+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.592019+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fc/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272063 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.592192+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.592394+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431275368s of 10.478595734s, submitted: 21
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.592619+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.592784+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555467/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.592943+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270371 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.593130+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.593254+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.593457+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555435/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.593605+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.593733+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272079 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.593865+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.593981+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.594140+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.594295+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1555597/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1555597/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.594429+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273611 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.974966049s of 13.371937752s, submitted: 23
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.594591+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x15554d0/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.594773+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.594946+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fe/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.595064+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.595203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271329 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.595332+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.595500+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555435/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.595739+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 679936 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.595906+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 638976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.596066+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [1])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272717 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.805570602s of 10.000674248s, submitted: 108
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.596185+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 647168 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.596340+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.596503+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.596672+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fc/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.596817+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273021 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.596959+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.597147+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555467/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.597316+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x15553d0/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.597479+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.597690+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271697 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.597886+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.005593300s of 11.149421692s, submitted: 49
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.598050+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 622592 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.598348+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 598016 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.598472+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 598016 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.598631+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 589824 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275191 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x1556fd6/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.598754+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 589824 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x1556fd6/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.598904+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.599066+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1556fa3/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.599255+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.599402+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277965 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.599560+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.599740+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558a54/0x1649000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.599917+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.530984879s of 11.597694397s, submitted: 42
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.600153+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.600284+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278937 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.600467+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558abd/0x164a000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.600745+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558abd/0x164a000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.600873+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.601058+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.601274+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280501 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.601502+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.601712+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.602029+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa83e000/0x0/0x4ffc00000, data 0x1558c4e/0x164c000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.142219543s of 10.170134544s, submitted: 14
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 ms_handle_reset con 0x561165781c00 session 0x5611630bce00
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0x1558bb3/0x164b000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.602265+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 360448 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.602465+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 360448 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 15
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281299 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.602726+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 335872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.602931+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103489536 unmapped: 319488 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.603117+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103489536 unmapped: 319488 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0x155a7b8/0x164e000, compress 0x0/0x0/0x0, omap 0x17563, meta 0x3d58a9d), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.603334+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.603543+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284073 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.603790+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.604046+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155c237/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.604339+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.604481+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.604661+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287711 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.604818+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.604984+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.780637741s of 14.128336906s, submitted: 178
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.605137+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.605303+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.605486+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284917 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.605617+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.605797+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.605973+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.606159+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.606335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288411 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.606466+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.606615+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.606745+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.606843+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.096637726s of 12.130258560s, submitted: 22
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.607006+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.607183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.607363+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.607546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.607707+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.607872+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.608000+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.608215+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.608403+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.608529+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.608678+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.608873+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.609016+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.265516281s of 12.271112442s, submitted: 47
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.609163+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.609335+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.609498+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0x15612ff/0x1657000, compress 0x0/0x0/0x0, omap 0x1806d, meta 0x3d57f93), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293959 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.610158+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.610685+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.611166+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0x15612ff/0x1657000, compress 0x0/0x0/0x0, omap 0x1806d, meta 0x3d57f93), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.611716+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.612142+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293959 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.612304+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.612710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.613148+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.613465+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.613775+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.614028+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.614303+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.614515+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.614711+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.614939+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.615186+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.615361+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.615635+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.615833+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.615977+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.616170+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.616318+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.616443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.616668+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.616885+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.617081+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.617380+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.617591+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.617771+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.617943+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.618080+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.618247+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.618384+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.618526+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.618660+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.618777+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.618924+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.619203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.619358+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.619499+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.619671+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.619857+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.620023+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.620178+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.620293+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.869987488s of 47.977024078s, submitted: 49
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298425 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.620429+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.620553+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.620694+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.620884+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.621046+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.621184+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298425 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.621279+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 1318912 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.621443+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 1318912 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa80c000/0x0/0x4ffc00000, data 0x158621e/0x1680000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.621620+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.621731+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.614818573s of 10.713614464s, submitted: 12
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.621856+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304117 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa7d5000/0x0/0x4ffc00000, data 0x15bf1d9/0x16b7000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 876544 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.622054+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 876544 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.622301+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 868352 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.622487+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 811008 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0x15c0dde/0x16ba000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.622643+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x15dbea1/0x16d5000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 811008 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.622786+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311547 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1269760 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.622961+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1269760 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.623123+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 983040 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.623248+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 983040 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x163c982/0x1735000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.623401+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 1179648 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.679551125s of 10.199654579s, submitted: 49
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.623533+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313163 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa751000/0x0/0x4ffc00000, data 0x163e8af/0x1739000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104128512 unmapped: 1777664 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.623799+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104128512 unmapped: 1777664 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.624059+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 1810432 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.624436+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 1810432 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.624739+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 1794048 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.624942+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317627 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105136128 unmapped: 1818624 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.625222+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 1630208 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.625438+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 1441792 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.625670+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 16
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 1441792 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.625911+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 1433600 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.847637177s of 10.000375748s, submitted: 37
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.626054+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319963 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 17
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 1425408 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.626269+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 1425408 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.626561+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x16fb25f/0x17f5000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.626758+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x16fb25f/0x17f5000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.627016+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.627203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.627475+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.627751+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.627911+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.628082+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.628279+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.628467+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.628638+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.628915+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.629134+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.629378+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.629532+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.629748+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.629899+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.630250+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.630475+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.630678+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.630920+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.631057+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.631294+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.631448+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.631602+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.631723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.631859+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.632006+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.632157+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.632304+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.632554+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.632701+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.632882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.633039+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.633164+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.633365+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.633573+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.633704+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.633875+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.634048+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.634319+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.634446+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.634563+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.634705+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.634831+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.635017+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.635187+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.635400+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.635612+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.635786+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.636063+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.636302+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.636546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.637171+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.637310+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.637688+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.638001+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.638197+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.528240204s of 60.003269196s, submitted: 7
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.638499+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa669000/0x0/0x4ffc00000, data 0x1728b68/0x1823000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.638915+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319891 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.639241+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.639504+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 1810432 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.639716+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 1843200 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 18
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.640224+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 1785856 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.640529+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa648000/0x0/0x4ffc00000, data 0x174a3f8/0x1844000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 1728512 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321771 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.640827+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.641015+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.641497+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa624000/0x0/0x4ffc00000, data 0x176dc69/0x1868000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.810779572s of 10.002739906s, submitted: 154
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.641654+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 2531328 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.642196+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 2482176 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330825 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.642427+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.642680+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.642881+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.643192+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa5bc000/0x0/0x4ffc00000, data 0x17d43e5/0x18d0000, compress 0x0/0x0/0x0, omap 0x18bea, meta 0x3d57416), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.643431+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328873 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.643546+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.643832+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 2473984 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.644049+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 2473984 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.644229+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.534772873s of 10.144953728s, submitted: 35
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0x17d5e64/0x18d3000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.644416+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331375 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.644759+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.645038+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.645221+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0x17d5e64/0x18d3000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.645400+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 1318912 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.645559+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 1318912 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333655 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.645708+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 1253376 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.645865+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 2367488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.646029+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 2367488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.646215+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x1809e86/0x1907000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.646390+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335447 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.646582+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.460929871s of 12.507403374s, submitted: 24
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.646759+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 3022848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.646930+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 3022848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa540000/0x0/0x4ffc00000, data 0x184ef2d/0x194c000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.647151+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 2908160 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa527000/0x0/0x4ffc00000, data 0x186811d/0x1965000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.647357+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 2859008 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337655 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.647528+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.647711+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.647882+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.648051+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 2179072 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.648202+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa4d3000/0x0/0x4ffc00000, data 0x18bbec7/0x19b9000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343511 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.648380+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.648558+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.962003708s of 11.394869804s, submitted: 20
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.648684+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1851392 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.648861+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1851392 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0x1912ebd/0x1a11000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.649028+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 1785856 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343393 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.649183+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 1785856 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.649336+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 2293760 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.649482+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 2293760 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.649634+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 1998848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x195fd37/0x1a5d000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.649760+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.649933+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.650075+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.650219+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.650454+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.650757+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.650911+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.651249+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.651453+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.651676+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.651878+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.184410095s of 17.946382523s, submitted: 20
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.652123+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.652278+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.652446+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.652588+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.652737+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347205 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.652892+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.653055+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.653227+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.653383+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.653538+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346917 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.653928+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.654193+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.654393+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.624203682s of 12.761359215s, submitted: 4
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.654585+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.654713+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348719 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.654853+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.655032+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.655203+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa421000/0x0/0x4ffc00000, data 0x196b68b/0x1a69000, compress 0x0/0x0/0x0, omap 0x19191, meta 0x3d56e6f), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.655428+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.655633+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.655912+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.656178+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.656396+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.656535+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.656669+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.656869+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.657084+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.657412+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.659500+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.662074+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.663076+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.663781+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.664930+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.665700+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.667080+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.668044+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.668739+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.669148+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.669429+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.670312+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.670743+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.671025+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.671492+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.671907+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.672081+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.672299+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.672774+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.673175+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.673596+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.673922+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.674343+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.674655+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.674871+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.675196+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.675357+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.675549+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.675710+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.675862+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.676033+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.676185+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.676401+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.676558+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.676721+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.676847+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.676980+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.677154+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.677255+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.677420+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.677577+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.677727+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.677932+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.678177+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.678334+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.678541+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.678723+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.679008+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.679169+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.679338+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.679480+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.679644+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.679902+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.680073+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.680260+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.680396+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.680513+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.680641+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.680772+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.680956+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.681120+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.681256+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:27.681470+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:28.681595+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 19
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:29.681712+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.399703979s of 79.442108154s, submitted: 30
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 ms_handle_reset con 0x561165781000 session 0x561163af5a40
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:30.681838+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:31.681964+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 10:53:06 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 20
Dec 04 10:53:06 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:32.682128+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 2498560 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}'
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:33.682246+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}'
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 3645440 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:34.682352+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 3637248 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 10:53:06 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:35.682478+0000)
Dec 04 10:53:06 compute-0 ceph-osd[86021]: do_command 'log dump' '{prefix=log dump}'
Dec 04 10:53:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 10:53:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: from='client.14652 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: pgmap v1316: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:07 compute-0 ceph-mon[75358]: from='client.14656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 04 10:53:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524562850' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 04 10:53:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 04 10:53:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555356406' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 04 10:53:07 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14668 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:08 compute-0 ceph-mon[75358]: from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2524562850' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 04 10:53:08 compute-0 ceph-mon[75358]: from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/555356406' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 04 10:53:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 04 10:53:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074315381' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:08 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14672 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 04 10:53:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138772843' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='client.14668 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: pgmap v1317: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3074315381' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='client.14672 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1138772843' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 04 10:53:09 compute-0 systemd[1]: Starting Hostname Service...
Dec 04 10:53:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 04 10:53:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 04 10:53:09 compute-0 systemd[1]: Started Hostname Service.
Dec 04 10:53:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 04 10:53:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554650943' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 04 10:53:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14686 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:10 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 04 10:53:10 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 04 10:53:10 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3554650943' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 04 10:53:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 04 10:53:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067542540' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: from='client.14686 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: pgmap v1318: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3067542540' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 04 10:53:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500722124' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:53:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372385571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:53:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372385571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 04 10:53:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675633290' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 04 10:53:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3500722124' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 04 10:53:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3372385571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:53:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3372385571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:53:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/675633290' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 04 10:53:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 04 10:53:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115682405' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 04 10:53:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:13 compute-0 ceph-mon[75358]: pgmap v1319: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Dec 04 10:53:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2115682405' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 04 10:53:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 04 10:53:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828148926' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 04 10:53:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 04 10:53:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632105724' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 04 10:53:14 compute-0 ceph-mon[75358]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1828148926' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 04 10:53:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3632105724' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 04 10:53:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 04 10:53:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 04 10:53:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022869857' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 04 10:53:15 compute-0 ceph-mon[75358]: pgmap v1320: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 04 10:53:15 compute-0 ceph-mon[75358]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:15 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4022869857' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 04 10:53:15 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14710 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:15 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14712 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 04 10:53:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959224964' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 04 10:53:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:16 compute-0 ceph-mon[75358]: from='client.14710 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:16 compute-0 ceph-mon[75358]: from='client.14712 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1959224964' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 04 10:53:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 04 10:53:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573602071' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 04 10:53:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:17 compute-0 ceph-mon[75358]: pgmap v1321: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:17 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1573602071' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14720 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:17 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:53:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 04 10:53:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280383602' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 04 10:53:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:18 compute-0 ovs-appctl[265835]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 04 10:53:18 compute-0 ceph-mon[75358]: from='client.14718 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:18 compute-0 ceph-mon[75358]: from='client.14720 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4280383602' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 04 10:53:18 compute-0 ovs-appctl[265846]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 04 10:53:18 compute-0 ovs-appctl[265865]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 04 10:53:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 04 10:53:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543130922' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 04 10:53:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:19 compute-0 ceph-mon[75358]: pgmap v1322: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:19 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2543130922' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 04 10:53:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:19 compute-0 podman[266484]: 2025-12-04 10:53:19.977861642 +0000 UTC m=+0.079468017 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 04 10:53:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 04 10:53:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202307606' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:53:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:20 compute-0 ceph-mon[75358]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:20 compute-0 ceph-mon[75358]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 10:53:20 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1202307606' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 10:53:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 04 10:53:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691450633' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 04 10:53:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 04 10:53:21 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887495222' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:21 compute-0 ceph-mon[75358]: pgmap v1323: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:21 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2691450633' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 04 10:53:21 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1887495222' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:21 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:21 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 04 10:53:22 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661471685' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:22 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 04 10:53:22 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090149102' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 04 10:53:22 compute-0 ceph-mon[75358]: from='client.14736 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:22 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1661471685' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 04 10:53:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388312021' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 04 10:53:23 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824423938' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:24 compute-0 ceph-mon[75358]: pgmap v1324: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:24 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2090149102' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 04 10:53:24 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2388312021' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:24 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1824423938' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:24 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 04 10:53:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181941954' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:25 compute-0 ceph-mon[75358]: pgmap v1325: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:25 compute-0 ceph-mon[75358]: from='client.14746 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:25 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2181941954' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 04 10:53:25 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953734450' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:26 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3953734450' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 04 10:53:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026114614' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:53:26
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Dec 04 10:53:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:53:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:27 compute-0 ceph-mon[75358]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:27 compute-0 ceph-mon[75358]: pgmap v1326: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:27 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2026114614' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 04 10:53:27 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14758 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:27 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 04 10:53:27 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010044204' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:28 compute-0 ceph-mon[75358]: from='client.14756 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:28 compute-0 ceph-mon[75358]: from='client.14758 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:28 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3010044204' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 04 10:53:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 04 10:53:28 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746445434' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:28 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14764 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:29 compute-0 podman[267378]: 2025-12-04 10:53:29.060165942 +0000 UTC m=+0.066975368 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 04 10:53:29 compute-0 podman[267380]: 2025-12-04 10:53:29.104129815 +0000 UTC m=+0.105007155 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:53:29 compute-0 ceph-mon[75358]: pgmap v1327: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:29 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2746445434' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 04 10:53:29 compute-0 sshd-session[267443]: error: kex_exchange_identification: read: Connection reset by peer
Dec 04 10:53:29 compute-0 sshd-session[267443]: Connection reset by 45.140.17.97 port 27303
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14766 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:53:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 04 10:53:29 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979935702' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:29 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 04 10:53:30 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec 04 10:53:30 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194483362' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:30 compute-0 ceph-mon[75358]: from='client.14764 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:30 compute-0 ceph-mon[75358]: from='client.14766 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:30 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2979935702' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 10:53:30 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4194483362' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 04 10:53:30 compute-0 systemd[1]: Starting Time & Date Service...
Dec 04 10:53:30 compute-0 systemd[1]: Started Time & Date Service.
Dec 04 10:53:30 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:31 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14774 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:31 compute-0 ceph-mon[75358]: pgmap v1328: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:31 compute-0 ceph-mon[75358]: from='client.14772 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 04 10:53:31 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738873796' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:53:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 04 10:53:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773558366' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 04 10:53:32 compute-0 ceph-mon[75358]: from='client.14774 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 10:53:32 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3738873796' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 04 10:53:32 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/773558366' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 04 10:53:33 compute-0 ceph-mon[75358]: pgmap v1329: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:35 compute-0 ceph-mon[75358]: pgmap v1330: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:36 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:53:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:53:37 compute-0 ceph-mon[75358]: pgmap v1331: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:39 compute-0 ceph-mon[75358]: pgmap v1332: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:41 compute-0 sudo[267990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:53:41 compute-0 sudo[267990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:41 compute-0 sudo[267990]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:41 compute-0 sudo[268015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:53:41 compute-0 sudo[268015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: pgmap v1333: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:41 compute-0 sudo[268015]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:53:41 compute-0 sudo[268070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:53:41 compute-0 sudo[268070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:41 compute-0 sudo[268070]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:41 compute-0 sudo[268095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:53:41 compute-0 sudo[268095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:41 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.076460828 +0000 UTC m=+0.047818418 container create 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:53:42 compute-0 systemd[1]: Started libpod-conmon-561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278.scope.
Dec 04 10:53:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.053278257 +0000 UTC m=+0.024635877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.164981706 +0000 UTC m=+0.136339316 container init 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.172627555 +0000 UTC m=+0.143985145 container start 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.176250483 +0000 UTC m=+0.147608093 container attach 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:53:42 compute-0 great_mclaren[268148]: 167 167
Dec 04 10:53:42 compute-0 systemd[1]: libpod-561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278.scope: Deactivated successfully.
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.179891103 +0000 UTC m=+0.151248693 container died 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffde52afc3c1baefcbd412730b52c3981fffaa60608312f50ed89ee3d81a1138-merged.mount: Deactivated successfully.
Dec 04 10:53:42 compute-0 podman[268132]: 2025-12-04 10:53:42.222787348 +0000 UTC m=+0.194144938 container remove 561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclaren, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:53:42 compute-0 systemd[1]: libpod-conmon-561538dd198f96f95fbca19e1c101ecd6f4707adbd7d3d85927015133f494278.scope: Deactivated successfully.
Dec 04 10:53:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:42 compute-0 podman[268172]: 2025-12-04 10:53:42.424619435 +0000 UTC m=+0.046843763 container create 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:53:42 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:53:42 compute-0 systemd[1]: Started libpod-conmon-148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87.scope.
Dec 04 10:53:42 compute-0 podman[268172]: 2025-12-04 10:53:42.403205308 +0000 UTC m=+0.025429666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:42 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:42 compute-0 podman[268172]: 2025-12-04 10:53:42.521817058 +0000 UTC m=+0.144041556 container init 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:53:42 compute-0 podman[268172]: 2025-12-04 10:53:42.533642598 +0000 UTC m=+0.155866946 container start 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:53:42 compute-0 podman[268172]: 2025-12-04 10:53:42.537973835 +0000 UTC m=+0.160198313 container attach 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:53:43 compute-0 gallant_cartwright[268189]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:53:43 compute-0 gallant_cartwright[268189]: --> All data devices are unavailable
Dec 04 10:53:43 compute-0 systemd[1]: libpod-148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87.scope: Deactivated successfully.
Dec 04 10:53:43 compute-0 podman[268172]: 2025-12-04 10:53:43.046340496 +0000 UTC m=+0.668564864 container died 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-350cfb8a0bbe84db8e2e7684b7f8cda8c9f9de1e73a5aad7660233b6f63dfe97-merged.mount: Deactivated successfully.
Dec 04 10:53:43 compute-0 podman[268172]: 2025-12-04 10:53:43.105585354 +0000 UTC m=+0.727809682 container remove 148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:53:43 compute-0 systemd[1]: libpod-conmon-148120d37d2c8ccc43164a2c81654e2fb4ec0288b112650e0985bfe1fd833d87.scope: Deactivated successfully.
Dec 04 10:53:43 compute-0 sudo[268095]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:43 compute-0 sudo[268221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:53:43 compute-0 sudo[268221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:43 compute-0 sudo[268221]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:43 compute-0 sudo[268246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:53:43 compute-0 sudo[268246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.653823775 +0000 UTC m=+0.052552854 container create 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:53:43 compute-0 ceph-mon[75358]: pgmap v1334: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:43 compute-0 systemd[1]: Started libpod-conmon-036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6.scope.
Dec 04 10:53:43 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.626446002 +0000 UTC m=+0.025175121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.727403476 +0000 UTC m=+0.126132575 container init 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.733873476 +0000 UTC m=+0.132602555 container start 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.737225578 +0000 UTC m=+0.135954677 container attach 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:53:43 compute-0 zen_visvesvaraya[268300]: 167 167
Dec 04 10:53:43 compute-0 systemd[1]: libpod-036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6.scope: Deactivated successfully.
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.740082348 +0000 UTC m=+0.138811427 container died 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a846b31f694ae92b7fb1c48789ccba5b950d7407a541364db94f073b92145e-merged.mount: Deactivated successfully.
Dec 04 10:53:43 compute-0 podman[268283]: 2025-12-04 10:53:43.790869138 +0000 UTC m=+0.189598217 container remove 036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:53:43 compute-0 systemd[1]: libpod-conmon-036026c8f90974e2a834b3d603ea812d2096ad04492f31be6ae58128a0f3d9a6.scope: Deactivated successfully.
Dec 04 10:53:43 compute-0 podman[268325]: 2025-12-04 10:53:43.964235935 +0000 UTC m=+0.045760328 container create c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 04 10:53:43 compute-0 systemd[1]: Started libpod-conmon-c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19.scope.
Dec 04 10:53:44 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1af492b40607aab3b380b9c11754d41f3cd45c37895eb5e1af8682464f4b789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1af492b40607aab3b380b9c11754d41f3cd45c37895eb5e1af8682464f4b789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1af492b40607aab3b380b9c11754d41f3cd45c37895eb5e1af8682464f4b789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1af492b40607aab3b380b9c11754d41f3cd45c37895eb5e1af8682464f4b789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:43.943906395 +0000 UTC m=+0.025430808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:44.165813305 +0000 UTC m=+0.247337708 container init c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:44.173567977 +0000 UTC m=+0.255092370 container start c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:44.272730386 +0000 UTC m=+0.354254809 container attach c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 04 10:53:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]: {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     "0": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "devices": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "/dev/loop3"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             ],
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_name": "ceph_lv0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_size": "21470642176",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "name": "ceph_lv0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "tags": {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.crush_device_class": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.encrypted": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_id": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.vdo": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.with_tpm": "0"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             },
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "vg_name": "ceph_vg0"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         }
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     ],
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     "1": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "devices": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "/dev/loop4"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             ],
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_name": "ceph_lv1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_size": "21470642176",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "name": "ceph_lv1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "tags": {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.crush_device_class": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.encrypted": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_id": "1",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.vdo": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.with_tpm": "0"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             },
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "vg_name": "ceph_vg1"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         }
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     ],
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     "2": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "devices": [
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "/dev/loop5"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             ],
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_name": "ceph_lv2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_size": "21470642176",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "name": "ceph_lv2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "tags": {
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.cluster_name": "ceph",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.crush_device_class": "",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.encrypted": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.objectstore": "bluestore",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osd_id": "2",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.vdo": "0",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:                 "ceph.with_tpm": "0"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             },
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "type": "block",
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:             "vg_name": "ceph_vg2"
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:         }
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]:     ]
Dec 04 10:53:44 compute-0 adoring_matsumoto[268342]: }
Dec 04 10:53:44 compute-0 systemd[1]: libpod-c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19.scope: Deactivated successfully.
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:44.479608708 +0000 UTC m=+0.561133101 container died c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1af492b40607aab3b380b9c11754d41f3cd45c37895eb5e1af8682464f4b789-merged.mount: Deactivated successfully.
Dec 04 10:53:44 compute-0 podman[268325]: 2025-12-04 10:53:44.533686849 +0000 UTC m=+0.615211242 container remove c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 04 10:53:44 compute-0 systemd[1]: libpod-conmon-c9b96d630ec999ff40c5508d1ca5a808488b6f1fa4b8575604c7d005b128fd19.scope: Deactivated successfully.
Dec 04 10:53:44 compute-0 sudo[268246]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:44 compute-0 sudo[268365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:53:44 compute-0 sudo[268365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:44 compute-0 sudo[268365]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:44 compute-0 sudo[268390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:53:44 compute-0 sudo[268390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:45 compute-0 podman[268427]: 2025-12-04 10:53:45.005307055 +0000 UTC m=+0.026096723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:45 compute-0 podman[268427]: 2025-12-04 10:53:45.310694571 +0000 UTC m=+0.331484219 container create 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:53:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:48 compute-0 systemd[1]: Started libpod-conmon-331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a.scope.
Dec 04 10:53:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:49 compute-0 ceph-mon[75358]: pgmap v1335: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:49 compute-0 podman[268427]: 2025-12-04 10:53:49.195882013 +0000 UTC m=+4.216671691 container init 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:53:49 compute-0 podman[268427]: 2025-12-04 10:53:49.204224559 +0000 UTC m=+4.225014207 container start 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:53:49 compute-0 objective_elgamal[268444]: 167 167
Dec 04 10:53:49 compute-0 podman[268427]: 2025-12-04 10:53:49.209476317 +0000 UTC m=+4.230265995 container attach 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:53:49 compute-0 systemd[1]: libpod-331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a.scope: Deactivated successfully.
Dec 04 10:53:49 compute-0 podman[268427]: 2025-12-04 10:53:49.209861368 +0000 UTC m=+4.230651036 container died 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 04 10:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-11a9876463be5d218093a586d65a50b50f26ccfea654f826864d32a19c59c0f2-merged.mount: Deactivated successfully.
Dec 04 10:53:49 compute-0 podman[268427]: 2025-12-04 10:53:49.250871546 +0000 UTC m=+4.271661194 container remove 331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 10:53:49 compute-0 systemd[1]: libpod-conmon-331b441c2c219b57b77a1b92510b2b307bc18c8fb9f5b17ea9dc324c37ca250a.scope: Deactivated successfully.
Dec 04 10:53:49 compute-0 nova_compute[244644]: 2025-12-04 10:53:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:49 compute-0 podman[268470]: 2025-12-04 10:53:49.425900774 +0000 UTC m=+0.056359188 container create dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:53:49 compute-0 systemd[1]: Started libpod-conmon-dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811.scope.
Dec 04 10:53:49 compute-0 podman[268470]: 2025-12-04 10:53:49.406969527 +0000 UTC m=+0.037427961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:53:49 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd2923b573a9e4ab84defa849374c5c0fcb1e7130be85e3ce574d2d87f73a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd2923b573a9e4ab84defa849374c5c0fcb1e7130be85e3ce574d2d87f73a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd2923b573a9e4ab84defa849374c5c0fcb1e7130be85e3ce574d2d87f73a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd2923b573a9e4ab84defa849374c5c0fcb1e7130be85e3ce574d2d87f73a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:53:49 compute-0 podman[268470]: 2025-12-04 10:53:49.51839087 +0000 UTC m=+0.148849314 container init dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:53:49 compute-0 podman[268470]: 2025-12-04 10:53:49.526750446 +0000 UTC m=+0.157208860 container start dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:53:49 compute-0 podman[268470]: 2025-12-04 10:53:49.549394273 +0000 UTC m=+0.179852717 container attach dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 04 10:53:50 compute-0 ceph-mon[75358]: pgmap v1336: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:50 compute-0 ceph-mon[75358]: pgmap v1337: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:50 compute-0 lvm[268572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:53:50 compute-0 lvm[268573]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:53:50 compute-0 lvm[268575]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:53:50 compute-0 lvm[268572]: VG ceph_vg0 finished
Dec 04 10:53:50 compute-0 lvm[268573]: VG ceph_vg1 finished
Dec 04 10:53:50 compute-0 lvm[268575]: VG ceph_vg2 finished
Dec 04 10:53:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:50 compute-0 podman[268563]: 2025-12-04 10:53:50.344977671 +0000 UTC m=+0.083201298 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 04 10:53:50 compute-0 lvm[268590]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:53:50 compute-0 lvm[268590]: VG ceph_vg2 finished
Dec 04 10:53:50 compute-0 priceless_ramanujan[268487]: {}
Dec 04 10:53:50 compute-0 systemd[1]: libpod-dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811.scope: Deactivated successfully.
Dec 04 10:53:50 compute-0 systemd[1]: libpod-dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811.scope: Consumed 1.494s CPU time.
Dec 04 10:53:50 compute-0 podman[268470]: 2025-12-04 10:53:50.416560373 +0000 UTC m=+1.047018827 container died dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4cd2923b573a9e4ab84defa849374c5c0fcb1e7130be85e3ce574d2d87f73a9-merged.mount: Deactivated successfully.
Dec 04 10:53:50 compute-0 podman[268470]: 2025-12-04 10:53:50.527925184 +0000 UTC m=+1.158383598 container remove dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:53:50 compute-0 systemd[1]: libpod-conmon-dbd56ecbfbb03c45e58d8da61986d776a3ff38f33c5204da8a4eee73f486a811.scope: Deactivated successfully.
Dec 04 10:53:50 compute-0 sudo[268390]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:53:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:50 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:53:50 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:50 compute-0 sudo[268607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:53:50 compute-0 sudo[268607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:53:50 compute-0 sudo[268607]: pam_unix(sudo:session): session closed for user root
Dec 04 10:53:51 compute-0 nova_compute[244644]: 2025-12-04 10:53:51.335 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:51 compute-0 nova_compute[244644]: 2025-12-04 10:53:51.358 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:51 compute-0 nova_compute[244644]: 2025-12-04 10:53:51.358 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:53:51 compute-0 nova_compute[244644]: 2025-12-04 10:53:51.358 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:53:51 compute-0 nova_compute[244644]: 2025-12-04 10:53:51.376 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:53:51 compute-0 ceph-mon[75358]: pgmap v1338: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:53:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:52 compute-0 nova_compute[244644]: 2025-12-04 10:53:52.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.372 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.372 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:53:53 compute-0 ceph-mon[75358]: pgmap v1339: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:53:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185494787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:53:53 compute-0 nova_compute[244644]: 2025-12-04 10:53:53.979 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.184 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.185 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4775MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.185 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.186 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:53:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1185494787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.774 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.775 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.863 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 04 10:53:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:53:54.922 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:53:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:53:54.923 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:53:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:53:54.923 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.951 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.952 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.970 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 04 10:53:54 compute-0 nova_compute[244644]: 2025-12-04 10:53:54.995 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.023 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:53:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:53:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187728524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.627 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.634 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:53:55 compute-0 ceph-mon[75358]: pgmap v1340: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1187728524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.742 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.743 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.743 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.744 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:55 compute-0 nova_compute[244644]: 2025-12-04 10:53:55.744 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 04 10:53:56 compute-0 nova_compute[244644]: 2025-12-04 10:53:56.054 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 04 10:53:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:56 compute-0 nova_compute[244644]: 2025-12-04 10:53:56.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:56 compute-0 nova_compute[244644]: 2025-12-04 10:53:56.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:56 compute-0 nova_compute[244644]: 2025-12-04 10:53:56.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:57 compute-0 nova_compute[244644]: 2025-12-04 10:53:57.367 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:57 compute-0 nova_compute[244644]: 2025-12-04 10:53:57.368 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:57 compute-0 nova_compute[244644]: 2025-12-04 10:53:57.368 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:53:57 compute-0 sshd-session[268676]: Invalid user zimbra from 107.175.213.239 port 47886
Dec 04 10:53:57 compute-0 sshd-session[268676]: Received disconnect from 107.175.213.239 port 47886:11: Bye Bye [preauth]
Dec 04 10:53:57 compute-0 sshd-session[268676]: Disconnected from invalid user zimbra 107.175.213.239 port 47886 [preauth]
Dec 04 10:53:58 compute-0 ceph-mon[75358]: pgmap v1341: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:53:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:53:59 compute-0 nova_compute[244644]: 2025-12-04 10:53:59.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:53:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:53:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:53:59 compute-0 podman[268679]: 2025-12-04 10:53:59.952822786 +0000 UTC m=+0.054548283 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 04 10:53:59 compute-0 podman[268678]: 2025-12-04 10:53:59.988037273 +0000 UTC m=+0.092459557 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:54:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:00 compute-0 ceph-mon[75358]: pgmap v1342: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:00 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 04 10:54:00 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 04 10:54:01 compute-0 ceph-mon[75358]: pgmap v1343: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:01 compute-0 sudo[261052]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:01 compute-0 sshd-session[261035]: Received disconnect from 192.168.122.10 port 42952:11: disconnected by user
Dec 04 10:54:01 compute-0 sshd-session[261035]: Disconnected from user zuul 192.168.122.10 port 42952
Dec 04 10:54:01 compute-0 sshd-session[261016]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:54:01 compute-0 systemd-logind[798]: Session 52 logged out. Waiting for processes to exit.
Dec 04 10:54:01 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 04 10:54:01 compute-0 systemd[1]: session-52.scope: Consumed 2min 52.527s CPU time, 864.0M memory peak, read 389.3M from disk, written 77.9M to disk.
Dec 04 10:54:01 compute-0 systemd-logind[798]: Removed session 52.
Dec 04 10:54:01 compute-0 sshd-session[268726]: Accepted publickey for zuul from 192.168.122.10 port 53134 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:54:01 compute-0 systemd-logind[798]: New session 53 of user zuul.
Dec 04 10:54:01 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 04 10:54:01 compute-0 sshd-session[268726]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:54:01 compute-0 sudo[268730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-04-qlnzjbx.tar.xz
Dec 04 10:54:01 compute-0 sudo[268730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:54:02 compute-0 sudo[268730]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:02 compute-0 sshd-session[268729]: Received disconnect from 192.168.122.10 port 53134:11: disconnected by user
Dec 04 10:54:02 compute-0 sshd-session[268729]: Disconnected from user zuul 192.168.122.10 port 53134
Dec 04 10:54:02 compute-0 sshd-session[268726]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:54:02 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Dec 04 10:54:02 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 04 10:54:02 compute-0 systemd-logind[798]: Removed session 53.
Dec 04 10:54:02 compute-0 sshd-session[268755]: Accepted publickey for zuul from 192.168.122.10 port 53138 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 10:54:02 compute-0 systemd-logind[798]: New session 54 of user zuul.
Dec 04 10:54:02 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 04 10:54:02 compute-0 sshd-session[268755]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 10:54:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:02 compute-0 sudo[268759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 04 10:54:02 compute-0 sudo[268759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 10:54:02 compute-0 sudo[268759]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:02 compute-0 sshd-session[268758]: Received disconnect from 192.168.122.10 port 53138:11: disconnected by user
Dec 04 10:54:02 compute-0 sshd-session[268758]: Disconnected from user zuul 192.168.122.10 port 53138
Dec 04 10:54:02 compute-0 sshd-session[268755]: pam_unix(sshd:session): session closed for user zuul
Dec 04 10:54:02 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 04 10:54:02 compute-0 systemd-logind[798]: Session 54 logged out. Waiting for processes to exit.
Dec 04 10:54:02 compute-0 systemd-logind[798]: Removed session 54.
Dec 04 10:54:03 compute-0 ceph-mon[75358]: pgmap v1344: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:05 compute-0 ceph-mon[75358]: pgmap v1345: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:54:06 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6615 writes, 30K keys, 6615 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6615 writes, 6615 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1692 writes, 8185 keys, 1692 commit groups, 1.0 writes per commit group, ingest: 10.69 MB, 0.02 MB/s
                                           Interval WAL: 1692 writes, 1692 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    108.8      0.31              0.11        16    0.019       0      0       0.0       0.0
                                             L6      1/0    8.44 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    153.3    125.7      0.91              0.29        15    0.061     73K   8424       0.0       0.0
                                            Sum      1/0    8.44 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    114.5    121.4      1.22              0.40        31    0.039     73K   8424       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    141.8    145.2      0.30              0.12         8    0.038     24K   2604       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    153.3    125.7      0.91              0.29        15    0.061     73K   8424       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.0      0.30              0.11        15    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 304.00 MB usage: 16.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000186 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1265,16.29 MB,5.35839%) FilterBlock(32,211.55 KB,0.0679568%) IndexBlock(32,384.89 KB,0.123641%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 04 10:54:07 compute-0 ceph-mon[75358]: pgmap v1346: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:08 compute-0 nova_compute[244644]: 2025-12-04 10:54:08.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:08 compute-0 nova_compute[244644]: 2025-12-04 10:54:08.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 04 10:54:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:08 compute-0 ceph-mon[75358]: pgmap v1347: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:54:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4009830431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:54:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:54:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4009830431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:54:11 compute-0 ceph-mon[75358]: pgmap v1348: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4009830431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:54:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/4009830431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:54:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:14 compute-0 ceph-mon[75358]: pgmap v1349: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:15 compute-0 ceph-mon[75358]: pgmap v1350: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:17 compute-0 ceph-mon[75358]: pgmap v1351: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:19 compute-0 ceph-mon[75358]: pgmap v1352: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:20 compute-0 podman[268784]: 2025-12-04 10:54:20.971078441 +0000 UTC m=+0.074465854 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 04 10:54:20 compute-0 ceph-mon[75358]: pgmap v1353: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:23 compute-0 ceph-mon[75358]: pgmap v1354: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:25 compute-0 ceph-mon[75358]: pgmap v1355: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:54:26
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', '.mgr', 'default.rgw.meta']
Dec 04 10:54:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:54:27 compute-0 ceph-mon[75358]: pgmap v1356: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:54:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:29 compute-0 ceph-mon[75358]: pgmap v1357: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:54:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:30 compute-0 podman[268807]: 2025-12-04 10:54:30.952455493 +0000 UTC m=+0.057498389 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 04 10:54:30 compute-0 podman[268806]: 2025-12-04 10:54:30.983058823 +0000 UTC m=+0.090342724 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 04 10:54:31 compute-0 ceph-mon[75358]: pgmap v1358: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:33 compute-0 ceph-mon[75358]: pgmap v1359: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:35 compute-0 ceph-mon[75358]: pgmap v1360: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.486630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675486722, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2438, "num_deletes": 507, "total_data_size": 3492872, "memory_usage": 3563824, "flush_reason": "Manual Compaction"}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675514631, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3435551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28662, "largest_seqno": 31099, "table_properties": {"data_size": 3424883, "index_size": 6266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26636, "raw_average_key_size": 20, "raw_value_size": 3400719, "raw_average_value_size": 2560, "num_data_blocks": 276, "num_entries": 1328, "num_filter_entries": 1328, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845468, "oldest_key_time": 1764845468, "file_creation_time": 1764845675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 28156 microseconds, and 8787 cpu microseconds.
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.514793) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3435551 bytes OK
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.514849) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.518971) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.518997) EVENT_LOG_v1 {"time_micros": 1764845675518990, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.519017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3481347, prev total WAL file size 3481347, number of live WAL files 2.
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.520475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3355KB)], [62(8642KB)]
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675520532, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12285598, "oldest_snapshot_seqno": -1}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6168 keys, 10584858 bytes, temperature: kUnknown
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675591752, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10584858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10541853, "index_size": 26511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 155397, "raw_average_key_size": 25, "raw_value_size": 10429559, "raw_average_value_size": 1690, "num_data_blocks": 1083, "num_entries": 6168, "num_filter_entries": 6168, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.592136) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10584858 bytes
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.594704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.1 rd, 148.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7202, records dropped: 1034 output_compression: NoCompression
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.594727) EVENT_LOG_v1 {"time_micros": 1764845675594717, "job": 34, "event": "compaction_finished", "compaction_time_micros": 71386, "compaction_time_cpu_micros": 27333, "output_level": 6, "num_output_files": 1, "total_output_size": 10584858, "num_input_records": 7202, "num_output_records": 6168, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675595699, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845675597561, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.520382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.597649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.597654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.597657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.597659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:35 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:54:35.597661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:54:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:37 compute-0 sshd-session[268852]: Invalid user support from 110.44.117.64 port 42802
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:54:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:54:37 compute-0 sshd-session[268852]: Connection closed by invalid user support 110.44.117.64 port 42802 [preauth]
Dec 04 10:54:37 compute-0 ceph-mon[75358]: pgmap v1361: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:39 compute-0 ceph-mon[75358]: pgmap v1362: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:41 compute-0 ceph-mon[75358]: pgmap v1363: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:43 compute-0 ceph-mon[75358]: pgmap v1364: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:45 compute-0 ceph-mon[75358]: pgmap v1365: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:46 compute-0 ceph-mon[75358]: pgmap v1366: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:49 compute-0 ceph-mon[75358]: pgmap v1367: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:49 compute-0 nova_compute[244644]: 2025-12-04 10:54:49.760 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:50 compute-0 sudo[268854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:54:50 compute-0 sudo[268854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:50 compute-0 sudo[268854]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:50 compute-0 sudo[268879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:54:50 compute-0 sudo[268879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:50 compute-0 ceph-mon[75358]: pgmap v1368: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:51 compute-0 sudo[268879]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:54:51 compute-0 nova_compute[244644]: 2025-12-04 10:54:51.395 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:54:51 compute-0 sudo[268935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:54:51 compute-0 sudo[268935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:51 compute-0 sudo[268935]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:51 compute-0 sudo[268966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:54:51 compute-0 sudo[268966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:51 compute-0 podman[268959]: 2025-12-04 10:54:51.735898263 +0000 UTC m=+0.060513193 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:54:51 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.044182482 +0000 UTC m=+0.046745666 container create 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 10:54:52 compute-0 systemd[1]: Started libpod-conmon-3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03.scope.
Dec 04 10:54:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.022117541 +0000 UTC m=+0.024680745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.290212085 +0000 UTC m=+0.292775289 container init 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.299613896 +0000 UTC m=+0.302177080 container start 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:54:52 compute-0 peaceful_leavitt[269035]: 167 167
Dec 04 10:54:52 compute-0 systemd[1]: libpod-3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03.scope: Deactivated successfully.
Dec 04 10:54:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.363402938 +0000 UTC m=+0.365966212 container attach 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.364256938 +0000 UTC m=+0.366820152 container died 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ad5375dcf227027a5e70148c4bf34be8c44151d1b9897fd7f9d07337dc5956-merged.mount: Deactivated successfully.
Dec 04 10:54:52 compute-0 podman[269018]: 2025-12-04 10:54:52.574971268 +0000 UTC m=+0.577534492 container remove 3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 04 10:54:52 compute-0 systemd[1]: libpod-conmon-3666390e03fae5495556233df51e368e2a2f049ae087f9e8e373ce0dd8b58b03.scope: Deactivated successfully.
Dec 04 10:54:52 compute-0 podman[269061]: 2025-12-04 10:54:52.745146204 +0000 UTC m=+0.046842207 container create 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec 04 10:54:52 compute-0 systemd[1]: Started libpod-conmon-8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c.scope.
Dec 04 10:54:52 compute-0 podman[269061]: 2025-12-04 10:54:52.721782053 +0000 UTC m=+0.023478086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:52 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:52 compute-0 podman[269061]: 2025-12-04 10:54:52.838671235 +0000 UTC m=+0.140367268 container init 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:54:52 compute-0 podman[269061]: 2025-12-04 10:54:52.845755298 +0000 UTC m=+0.147451301 container start 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:54:52 compute-0 podman[269061]: 2025-12-04 10:54:52.898455939 +0000 UTC m=+0.200151942 container attach 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:54:53 compute-0 ceph-mon[75358]: pgmap v1369: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:53 compute-0 keen_pike[269077]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:54:53 compute-0 keen_pike[269077]: --> All data devices are unavailable
Dec 04 10:54:53 compute-0 systemd[1]: libpod-8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c.scope: Deactivated successfully.
Dec 04 10:54:53 compute-0 podman[269061]: 2025-12-04 10:54:53.302944363 +0000 UTC m=+0.604640366 container died 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.356 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.374 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.375 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.375 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.375 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.375 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9397a9443d245593f199a4a229cd25e83091a7591b50cd973089e868fac84fb-merged.mount: Deactivated successfully.
Dec 04 10:54:53 compute-0 podman[269061]: 2025-12-04 10:54:53.758261482 +0000 UTC m=+1.059957495 container remove 8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:54:53 compute-0 sudo[268966]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:53 compute-0 sudo[269126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:54:53 compute-0 systemd[1]: libpod-conmon-8ecd162cf6774e4748ce98fa22f1d31f39b9f32f951f1cb00b46f2c891567e9c.scope: Deactivated successfully.
Dec 04 10:54:53 compute-0 sudo[269126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:53 compute-0 sudo[269126]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:54:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197825658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:54:53 compute-0 sudo[269152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:54:53 compute-0 sudo[269152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:53 compute-0 nova_compute[244644]: 2025-12-04 10:54:53.943 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:54:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/197825658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.118 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.120 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.120 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.121 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.208 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.208 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.232 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.245194595 +0000 UTC m=+0.039436687 container create 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 04 10:54:54 compute-0 systemd[1]: Started libpod-conmon-85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9.scope.
Dec 04 10:54:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.227832059 +0000 UTC m=+0.022074181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.325259685 +0000 UTC m=+0.119501797 container init 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.331434936 +0000 UTC m=+0.125677028 container start 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 04 10:54:54 compute-0 confident_dirac[269211]: 167 167
Dec 04 10:54:54 compute-0 systemd[1]: libpod-85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9.scope: Deactivated successfully.
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.336331726 +0000 UTC m=+0.130573848 container attach 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:54:54 compute-0 conmon[269211]: conmon 85fed64f95573dc1b05d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9.scope/container/memory.events
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.339734319 +0000 UTC m=+0.133976411 container died 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1d305c86c9b18b50839cc873bd81352d3777c0f04e673188f8c6f3190340022-merged.mount: Deactivated successfully.
Dec 04 10:54:54 compute-0 podman[269193]: 2025-12-04 10:54:54.382788524 +0000 UTC m=+0.177030616 container remove 85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:54:54 compute-0 systemd[1]: libpod-conmon-85fed64f95573dc1b05dd0b5168a65dad430b64b3c789d4bab62c922d49438c9.scope: Deactivated successfully.
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.561842318 +0000 UTC m=+0.040193145 container create 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:54:54 compute-0 systemd[1]: Started libpod-conmon-4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819.scope.
Dec 04 10:54:54 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3e320b81cb44eb198e53a1c721181b1eb68ee5321d84694e89060b44ef0a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3e320b81cb44eb198e53a1c721181b1eb68ee5321d84694e89060b44ef0a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.542967495 +0000 UTC m=+0.021318122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3e320b81cb44eb198e53a1c721181b1eb68ee5321d84694e89060b44ef0a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3e320b81cb44eb198e53a1c721181b1eb68ee5321d84694e89060b44ef0a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.658954306 +0000 UTC m=+0.137304943 container init 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.667050664 +0000 UTC m=+0.145401271 container start 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.670694364 +0000 UTC m=+0.149044991 container attach 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:54:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:54:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291298816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.819 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.828 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.845 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.847 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:54:54 compute-0 nova_compute[244644]: 2025-12-04 10:54:54.848 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:54:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:54:54.923 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:54:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:54:54.924 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:54:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:54:54.924 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]: {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     "0": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "devices": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "/dev/loop3"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             ],
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_name": "ceph_lv0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_size": "21470642176",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "name": "ceph_lv0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "tags": {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_name": "ceph",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.crush_device_class": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.encrypted": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.objectstore": "bluestore",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_id": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.vdo": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.with_tpm": "0"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             },
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "vg_name": "ceph_vg0"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         }
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     ],
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     "1": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "devices": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "/dev/loop4"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             ],
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_name": "ceph_lv1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_size": "21470642176",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "name": "ceph_lv1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "tags": {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_name": "ceph",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.crush_device_class": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.encrypted": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.objectstore": "bluestore",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_id": "1",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.vdo": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.with_tpm": "0"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             },
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "vg_name": "ceph_vg1"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         }
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     ],
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     "2": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "devices": [
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "/dev/loop5"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             ],
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_name": "ceph_lv2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_size": "21470642176",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "name": "ceph_lv2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "tags": {
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.cluster_name": "ceph",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.crush_device_class": "",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.encrypted": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.objectstore": "bluestore",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osd_id": "2",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.vdo": "0",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:                 "ceph.with_tpm": "0"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             },
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "type": "block",
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:             "vg_name": "ceph_vg2"
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:         }
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]:     ]
Dec 04 10:54:54 compute-0 dreamy_johnson[269269]: }
Dec 04 10:54:54 compute-0 systemd[1]: libpod-4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819.scope: Deactivated successfully.
Dec 04 10:54:54 compute-0 podman[269252]: 2025-12-04 10:54:54.969716765 +0000 UTC m=+0.448067372 container died 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b3e320b81cb44eb198e53a1c721181b1eb68ee5321d84694e89060b44ef0a04-merged.mount: Deactivated successfully.
Dec 04 10:54:55 compute-0 podman[269252]: 2025-12-04 10:54:55.021733178 +0000 UTC m=+0.500083785 container remove 4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:54:55 compute-0 systemd[1]: libpod-conmon-4f79e9c4b0a2dcfb11b295518c39c5dadce31f14663fc1b7d5293c52ce259819.scope: Deactivated successfully.
Dec 04 10:54:55 compute-0 sudo[269152]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:55 compute-0 ceph-mon[75358]: pgmap v1370: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4291298816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:54:55 compute-0 sudo[269291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:54:55 compute-0 sudo[269291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:55 compute-0 sudo[269291]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:55 compute-0 sudo[269316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:54:55 compute-0 sudo[269316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.52328726 +0000 UTC m=+0.040738759 container create ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:54:55 compute-0 systemd[1]: Started libpod-conmon-ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237.scope.
Dec 04 10:54:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.506659333 +0000 UTC m=+0.024110852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.603469223 +0000 UTC m=+0.120920742 container init ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.610165037 +0000 UTC m=+0.127616536 container start ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.614393341 +0000 UTC m=+0.131844860 container attach ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 10:54:55 compute-0 inspiring_moore[269370]: 167 167
Dec 04 10:54:55 compute-0 systemd[1]: libpod-ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237.scope: Deactivated successfully.
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.615748034 +0000 UTC m=+0.133199533 container died ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 04 10:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-50ce66bb047952aad68c8414cdff6b76bfee952bcb09e96a4723a59bc2189109-merged.mount: Deactivated successfully.
Dec 04 10:54:55 compute-0 podman[269353]: 2025-12-04 10:54:55.655720892 +0000 UTC m=+0.173172391 container remove ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moore, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:54:55 compute-0 systemd[1]: libpod-conmon-ab05b89e4412bd8d88ba16df8bc5d7d11a7610c736ca4f10d56a440bd5cdf237.scope: Deactivated successfully.
Dec 04 10:54:55 compute-0 podman[269394]: 2025-12-04 10:54:55.820530108 +0000 UTC m=+0.043895215 container create ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:54:55 compute-0 nova_compute[244644]: 2025-12-04 10:54:55.831 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:55 compute-0 systemd[1]: Started libpod-conmon-ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46.scope.
Dec 04 10:54:55 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a275a742c8c903f50a2c7a089b387c0a389028efb04fc469f8dea7e897564b77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a275a742c8c903f50a2c7a089b387c0a389028efb04fc469f8dea7e897564b77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a275a742c8c903f50a2c7a089b387c0a389028efb04fc469f8dea7e897564b77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a275a742c8c903f50a2c7a089b387c0a389028efb04fc469f8dea7e897564b77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:54:55 compute-0 podman[269394]: 2025-12-04 10:54:55.799463842 +0000 UTC m=+0.022828959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:54:55 compute-0 podman[269394]: 2025-12-04 10:54:55.900592388 +0000 UTC m=+0.123957495 container init ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:54:55 compute-0 podman[269394]: 2025-12-04 10:54:55.911747052 +0000 UTC m=+0.135112159 container start ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 04 10:54:55 compute-0 podman[269394]: 2025-12-04 10:54:55.915589876 +0000 UTC m=+0.138954983 container attach ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:54:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:56 compute-0 nova_compute[244644]: 2025-12-04 10:54:56.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:56 compute-0 lvm[269486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:54:56 compute-0 lvm[269486]: VG ceph_vg0 finished
Dec 04 10:54:56 compute-0 lvm[269489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:54:56 compute-0 lvm[269489]: VG ceph_vg1 finished
Dec 04 10:54:56 compute-0 lvm[269491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:54:56 compute-0 lvm[269491]: VG ceph_vg2 finished
Dec 04 10:54:56 compute-0 hungry_varahamihira[269410]: {}
Dec 04 10:54:56 compute-0 systemd[1]: libpod-ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46.scope: Deactivated successfully.
Dec 04 10:54:56 compute-0 podman[269394]: 2025-12-04 10:54:56.801439146 +0000 UTC m=+1.024804273 container died ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:54:56 compute-0 systemd[1]: libpod-ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46.scope: Consumed 1.401s CPU time.
Dec 04 10:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a275a742c8c903f50a2c7a089b387c0a389028efb04fc469f8dea7e897564b77-merged.mount: Deactivated successfully.
Dec 04 10:54:56 compute-0 podman[269394]: 2025-12-04 10:54:56.986300093 +0000 UTC m=+1.209665200 container remove ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 04 10:54:56 compute-0 systemd[1]: libpod-conmon-ab766e9118c05f04e30f747a669bf974bf090c349a29f3ecd2db07e2a9ed1b46.scope: Deactivated successfully.
Dec 04 10:54:57 compute-0 sudo[269316]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:54:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:54:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:57 compute-0 sudo[269507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:54:57 compute-0 sudo[269507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:54:57 compute-0 sudo[269507]: pam_unix(sudo:session): session closed for user root
Dec 04 10:54:57 compute-0 nova_compute[244644]: 2025-12-04 10:54:57.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:57 compute-0 ceph-mon[75358]: pgmap v1371: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:57 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:54:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:54:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:58 compute-0 nova_compute[244644]: 2025-12-04 10:54:58.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:58 compute-0 nova_compute[244644]: 2025-12-04 10:54:58.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:54:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:54:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:54:59 compute-0 nova_compute[244644]: 2025-12-04 10:54:59.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:54:59 compute-0 ceph-mon[75358]: pgmap v1372: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:54:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:54:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:01 compute-0 nova_compute[244644]: 2025-12-04 10:55:01.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:01 compute-0 ceph-mon[75358]: pgmap v1373: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:01 compute-0 podman[269533]: 2025-12-04 10:55:01.967357208 +0000 UTC m=+0.065859543 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:55:02 compute-0 podman[269532]: 2025-12-04 10:55:02.012989186 +0000 UTC m=+0.107953775 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec 04 10:55:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:03 compute-0 ceph-mon[75358]: pgmap v1374: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:05 compute-0 ceph-mon[75358]: pgmap v1375: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:07 compute-0 ceph-mon[75358]: pgmap v1376: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:09 compute-0 ceph-mon[75358]: pgmap v1377: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:55:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036199202' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:55:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:55:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036199202' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:55:11 compute-0 ceph-mon[75358]: pgmap v1378: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1036199202' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:55:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1036199202' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:55:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:13 compute-0 ceph-mon[75358]: pgmap v1379: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:15 compute-0 ceph-mon[75358]: pgmap v1380: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:16 compute-0 ceph-mon[75358]: pgmap v1381: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:19 compute-0 ceph-mon[75358]: pgmap v1382: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:21 compute-0 ceph-mon[75358]: pgmap v1383: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:21 compute-0 podman[269576]: 2025-12-04 10:55:21.953983167 +0000 UTC m=+0.062193405 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 04 10:55:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:55:22 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2688 syncs, 3.77 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1978 writes, 5639 keys, 1978 commit groups, 1.0 writes per commit group, ingest: 6.38 MB, 0.01 MB/s
                                           Interval WAL: 1978 writes, 715 syncs, 2.77 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:55:23 compute-0 ceph-mon[75358]: pgmap v1384: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:25 compute-0 ceph-mon[75358]: pgmap v1385: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:55:26
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr']
Dec 04 10:55:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:55:27 compute-0 ceph-mon[75358]: pgmap v1386: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:55:29 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4009 syncs, 3.38 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3149 writes, 9876 keys, 3149 commit groups, 1.0 writes per commit group, ingest: 14.97 MB, 0.02 MB/s
                                           Interval WAL: 3149 writes, 1202 syncs, 2.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:55:29 compute-0 ceph-mon[75358]: pgmap v1387: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:31 compute-0 ceph-mon[75358]: pgmap v1388: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:32 compute-0 podman[269597]: 2025-12-04 10:55:32.942992671 +0000 UTC m=+0.047678338 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:55:32 compute-0 podman[269596]: 2025-12-04 10:55:32.962857438 +0000 UTC m=+0.071107612 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:55:33 compute-0 ceph-mon[75358]: pgmap v1389: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:35 compute-0 ceph-mon[75358]: pgmap v1390: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:55:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:55:37 compute-0 ceph-mon[75358]: pgmap v1391: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 10:55:37 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2835 syncs, 3.75 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1646 writes, 3739 keys, 1646 commit groups, 1.0 writes per commit group, ingest: 2.53 MB, 0.00 MB/s
                                           Interval WAL: 1646 writes, 515 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 10:55:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:39 compute-0 ceph-mon[75358]: pgmap v1392: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:41 compute-0 ceph-mon[75358]: pgmap v1393: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:43 compute-0 ceph-mon[75358]: pgmap v1394: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:43 compute-0 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec 04 10:55:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:45 compute-0 ceph-mon[75358]: pgmap v1395: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:47 compute-0 ceph-mon[75358]: pgmap v1396: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:48 compute-0 sshd-session[269641]: Invalid user mega from 107.175.213.239 port 50730
Dec 04 10:55:48 compute-0 sshd-session[269641]: Received disconnect from 107.175.213.239 port 50730:11: Bye Bye [preauth]
Dec 04 10:55:48 compute-0 sshd-session[269641]: Disconnected from invalid user mega 107.175.213.239 port 50730 [preauth]
Dec 04 10:55:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:49 compute-0 ceph-mon[75358]: pgmap v1397: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:51 compute-0 ceph-mon[75358]: pgmap v1398: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:52 compute-0 nova_compute[244644]: 2025-12-04 10:55:52.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:52 compute-0 podman[269643]: 2025-12-04 10:55:52.956817207 +0000 UTC m=+0.061068296 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.356 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.382 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.383 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.383 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.383 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.384 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:55:53 compute-0 ceph-mon[75358]: pgmap v1399: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:55:53 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679214609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:55:53 compute-0 nova_compute[244644]: 2025-12-04 10:55:53.994 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.176 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4938MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.178 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.242 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.243 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.260 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:55:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3679214609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:55:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:55:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022097015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.821 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.827 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.841 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.843 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:55:54 compute-0 nova_compute[244644]: 2025-12-04 10:55:54.843 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:55:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:55:54.924 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:55:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:55:54.925 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:55:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:55:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:55:55 compute-0 ceph-mon[75358]: pgmap v1400: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4022097015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:55:55 compute-0 nova_compute[244644]: 2025-12-04 10:55:55.825 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:55 compute-0 nova_compute[244644]: 2025-12-04 10:55:55.844 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:57 compute-0 sudo[269709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:55:57 compute-0 sudo[269709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:57 compute-0 sudo[269709]: pam_unix(sudo:session): session closed for user root
Dec 04 10:55:57 compute-0 sudo[269734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:55:57 compute-0 sudo[269734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: pgmap v1401: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:57 compute-0 sudo[269734]: pam_unix(sudo:session): session closed for user root
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:55:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:55:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:55:57 compute-0 sudo[269790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:55:57 compute-0 sudo[269790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:57 compute-0 sudo[269790]: pam_unix(sudo:session): session closed for user root
Dec 04 10:55:57 compute-0 sudo[269815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:55:57 compute-0 sudo[269815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.182150192 +0000 UTC m=+0.040779889 container create 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:55:58 compute-0 systemd[1]: Started libpod-conmon-74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd.scope.
Dec 04 10:55:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.260286676 +0000 UTC m=+0.118916403 container init 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.16609067 +0000 UTC m=+0.024720387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.266715754 +0000 UTC m=+0.125345451 container start 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.270609289 +0000 UTC m=+0.129238986 container attach 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:55:58 compute-0 amazing_feynman[269868]: 167 167
Dec 04 10:55:58 compute-0 systemd[1]: libpod-74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd.scope: Deactivated successfully.
Dec 04 10:55:58 compute-0 conmon[269868]: conmon 74859f004836833a8ba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd.scope/container/memory.events
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.273272124 +0000 UTC m=+0.131901821 container died 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f54ece22e3d7e514447140de5c699b1e77217415e01274f5a0fe969557d31f1-merged.mount: Deactivated successfully.
Dec 04 10:55:58 compute-0 podman[269852]: 2025-12-04 10:55:58.314892243 +0000 UTC m=+0.173521940 container remove 74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_feynman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:55:58 compute-0 systemd[1]: libpod-conmon-74859f004836833a8ba772bfad8126a3847a4b7dde54f7df600a6f7b8a985edd.scope: Deactivated successfully.
Dec 04 10:55:58 compute-0 nova_compute[244644]: 2025-12-04 10:55:58.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:58 compute-0 nova_compute[244644]: 2025-12-04 10:55:58.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:58 compute-0 nova_compute[244644]: 2025-12-04 10:55:58.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:55:58 compute-0 nova_compute[244644]: 2025-12-04 10:55:58.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:55:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:58 compute-0 podman[269892]: 2025-12-04 10:55:58.465563873 +0000 UTC m=+0.039684564 container create 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:55:58 compute-0 systemd[1]: Started libpod-conmon-06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652.scope.
Dec 04 10:55:58 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:58 compute-0 podman[269892]: 2025-12-04 10:55:58.447698285 +0000 UTC m=+0.021818996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:55:58 compute-0 podman[269892]: 2025-12-04 10:55:58.544345071 +0000 UTC m=+0.118465782 container init 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:55:58 compute-0 podman[269892]: 2025-12-04 10:55:58.551744802 +0000 UTC m=+0.125865493 container start 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:55:58 compute-0 podman[269892]: 2025-12-04 10:55:58.555486244 +0000 UTC m=+0.129606955 container attach 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:55:58 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:55:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:55:58 compute-0 stoic_lederberg[269908]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:55:58 compute-0 stoic_lederberg[269908]: --> All data devices are unavailable
Dec 04 10:55:59 compute-0 systemd[1]: libpod-06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652.scope: Deactivated successfully.
Dec 04 10:55:59 compute-0 podman[269892]: 2025-12-04 10:55:59.007682187 +0000 UTC m=+0.581802878 container died 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dfd6f3d0a0d3f19f16a17890517912be4f3d4c9b1f0b032891cfd7cfc90ada7-merged.mount: Deactivated successfully.
Dec 04 10:55:59 compute-0 podman[269892]: 2025-12-04 10:55:59.057832865 +0000 UTC m=+0.631953556 container remove 06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:55:59 compute-0 systemd[1]: libpod-conmon-06dfe87ac149169e98c28782547f874228dafec7be4e12254a6eb65bf85e7652.scope: Deactivated successfully.
Dec 04 10:55:59 compute-0 sudo[269815]: pam_unix(sudo:session): session closed for user root
Dec 04 10:55:59 compute-0 sudo[269939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:55:59 compute-0 sudo[269939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:59 compute-0 sudo[269939]: pam_unix(sudo:session): session closed for user root
Dec 04 10:55:59 compute-0 sudo[269964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:55:59 compute-0 sudo[269964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.503597469 +0000 UTC m=+0.038645386 container create f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:55:59 compute-0 systemd[1]: Started libpod-conmon-f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d.scope.
Dec 04 10:55:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.485975119 +0000 UTC m=+0.021023056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.588184091 +0000 UTC m=+0.123232028 container init f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.596195337 +0000 UTC m=+0.131243254 container start f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.600565574 +0000 UTC m=+0.135613491 container attach f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:55:59 compute-0 zen_varahamihira[270017]: 167 167
Dec 04 10:55:59 compute-0 systemd[1]: libpod-f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d.scope: Deactivated successfully.
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.604432179 +0000 UTC m=+0.139480096 container died f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-653a7e6153b337b84a67e6809fe37c643a181843c739ac1aeb5838fec267b39e-merged.mount: Deactivated successfully.
Dec 04 10:55:59 compute-0 podman[270000]: 2025-12-04 10:55:59.646357175 +0000 UTC m=+0.181405092 container remove f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:55:59 compute-0 systemd[1]: libpod-conmon-f19a2b148a0d31c7225b24b66dc0ec4494e91540fcb7dc53cd11d5b8fb76317d.scope: Deactivated successfully.
Dec 04 10:55:59 compute-0 ceph-mon[75358]: pgmap v1402: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:55:59 compute-0 podman[270041]: 2025-12-04 10:55:59.801612957 +0000 UTC m=+0.041953099 container create d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 04 10:55:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:55:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:55:59 compute-0 systemd[1]: Started libpod-conmon-d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289.scope.
Dec 04 10:55:59 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba5dfa8f3991a6567cbcbe8d26f213d7252c1aa22b11a53c3e07e2c6562307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba5dfa8f3991a6567cbcbe8d26f213d7252c1aa22b11a53c3e07e2c6562307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba5dfa8f3991a6567cbcbe8d26f213d7252c1aa22b11a53c3e07e2c6562307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba5dfa8f3991a6567cbcbe8d26f213d7252c1aa22b11a53c3e07e2c6562307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:55:59 compute-0 podman[270041]: 2025-12-04 10:55:59.782882679 +0000 UTC m=+0.023222841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:55:59 compute-0 podman[270041]: 2025-12-04 10:55:59.883066311 +0000 UTC m=+0.123406483 container init d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 10:55:59 compute-0 podman[270041]: 2025-12-04 10:55:59.889351535 +0000 UTC m=+0.129691677 container start d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 10:55:59 compute-0 podman[270041]: 2025-12-04 10:55:59.892716227 +0000 UTC m=+0.133056389 container attach d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec 04 10:56:00 compute-0 wonderful_booth[270057]: {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     "0": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "devices": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "/dev/loop3"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             ],
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_name": "ceph_lv0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_size": "21470642176",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "name": "ceph_lv0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "tags": {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_name": "ceph",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.crush_device_class": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.encrypted": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.objectstore": "bluestore",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_id": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.vdo": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.with_tpm": "0"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             },
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "vg_name": "ceph_vg0"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         }
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     ],
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     "1": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "devices": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "/dev/loop4"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             ],
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_name": "ceph_lv1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_size": "21470642176",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "name": "ceph_lv1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "tags": {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_name": "ceph",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.crush_device_class": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.encrypted": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.objectstore": "bluestore",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_id": "1",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.vdo": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.with_tpm": "0"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             },
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "vg_name": "ceph_vg1"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         }
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     ],
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     "2": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "devices": [
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "/dev/loop5"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             ],
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_name": "ceph_lv2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_size": "21470642176",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "name": "ceph_lv2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "tags": {
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.cluster_name": "ceph",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.crush_device_class": "",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.encrypted": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.objectstore": "bluestore",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osd_id": "2",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.vdo": "0",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:                 "ceph.with_tpm": "0"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             },
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "type": "block",
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:             "vg_name": "ceph_vg2"
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:         }
Dec 04 10:56:00 compute-0 wonderful_booth[270057]:     ]
Dec 04 10:56:00 compute-0 wonderful_booth[270057]: }
Dec 04 10:56:00 compute-0 systemd[1]: libpod-d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289.scope: Deactivated successfully.
Dec 04 10:56:00 compute-0 podman[270041]: 2025-12-04 10:56:00.215466931 +0000 UTC m=+0.455807073 container died d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ba5dfa8f3991a6567cbcbe8d26f213d7252c1aa22b11a53c3e07e2c6562307-merged.mount: Deactivated successfully.
Dec 04 10:56:00 compute-0 podman[270041]: 2025-12-04 10:56:00.267922515 +0000 UTC m=+0.508262657 container remove d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:56:00 compute-0 systemd[1]: libpod-conmon-d09fdb0f6247d4d3521c18767347e47cab529c11ad9cbb9a9f700c07d33eb289.scope: Deactivated successfully.
Dec 04 10:56:00 compute-0 sudo[269964]: pam_unix(sudo:session): session closed for user root
Dec 04 10:56:00 compute-0 nova_compute[244644]: 2025-12-04 10:56:00.335 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:00 compute-0 sudo[270078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:56:00 compute-0 sudo[270078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:56:00 compute-0 sudo[270078]: pam_unix(sudo:session): session closed for user root
Dec 04 10:56:00 compute-0 sudo[270103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:56:00 compute-0 sudo[270103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.72144335 +0000 UTC m=+0.039117059 container create 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:56:00 compute-0 systemd[1]: Started libpod-conmon-27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8.scope.
Dec 04 10:56:00 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.70391043 +0000 UTC m=+0.021584159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.809713421 +0000 UTC m=+0.127387130 container init 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.818326042 +0000 UTC m=+0.135999751 container start 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.823190131 +0000 UTC m=+0.140863840 container attach 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:56:00 compute-0 wizardly_ptolemy[270152]: 167 167
Dec 04 10:56:00 compute-0 systemd[1]: libpod-27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8.scope: Deactivated successfully.
Dec 04 10:56:00 compute-0 conmon[270152]: conmon 27ef88acfa22683bb9e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8.scope/container/memory.events
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.82723679 +0000 UTC m=+0.144910489 container died 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dffaf344807b20f5a5cdc9cb93b138e390584d95e0e389ffb3b3e424309d294-merged.mount: Deactivated successfully.
Dec 04 10:56:00 compute-0 podman[270138]: 2025-12-04 10:56:00.868375467 +0000 UTC m=+0.186049176 container remove 27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ptolemy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:56:00 compute-0 systemd[1]: libpod-conmon-27ef88acfa22683bb9e07c283ae73d58f33581fd9b2f807e795eb9af877e60b8.scope: Deactivated successfully.
Dec 04 10:56:01 compute-0 podman[270177]: 2025-12-04 10:56:01.034511266 +0000 UTC m=+0.042793170 container create c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:56:01 compute-0 systemd[1]: Started libpod-conmon-c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c.scope.
Dec 04 10:56:01 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb553f1c62181536249ef16e8990739cfcd4183e3e75767c4135be6338c4d7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb553f1c62181536249ef16e8990739cfcd4183e3e75767c4135be6338c4d7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb553f1c62181536249ef16e8990739cfcd4183e3e75767c4135be6338c4d7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb553f1c62181536249ef16e8990739cfcd4183e3e75767c4135be6338c4d7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:56:01 compute-0 podman[270177]: 2025-12-04 10:56:01.016421923 +0000 UTC m=+0.024703847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:56:01 compute-0 podman[270177]: 2025-12-04 10:56:01.118606555 +0000 UTC m=+0.126888479 container init c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 10:56:01 compute-0 podman[270177]: 2025-12-04 10:56:01.126362284 +0000 UTC m=+0.134644188 container start c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 04 10:56:01 compute-0 podman[270177]: 2025-12-04 10:56:01.131029549 +0000 UTC m=+0.139311453 container attach c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:56:01 compute-0 nova_compute[244644]: 2025-12-04 10:56:01.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:01 compute-0 ceph-mon[75358]: pgmap v1403: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:01 compute-0 lvm[270272]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:56:01 compute-0 lvm[270272]: VG ceph_vg1 finished
Dec 04 10:56:01 compute-0 lvm[270271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:56:01 compute-0 lvm[270271]: VG ceph_vg0 finished
Dec 04 10:56:01 compute-0 anacron[30888]: Job `cron.monthly' started
Dec 04 10:56:01 compute-0 anacron[30888]: Job `cron.monthly' terminated
Dec 04 10:56:01 compute-0 anacron[30888]: Normal exit (3 jobs run)
Dec 04 10:56:01 compute-0 lvm[270276]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:56:01 compute-0 lvm[270276]: VG ceph_vg2 finished
Dec 04 10:56:01 compute-0 peaceful_ardinghelli[270193]: {}
Dec 04 10:56:02 compute-0 systemd[1]: libpod-c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c.scope: Deactivated successfully.
Dec 04 10:56:02 compute-0 systemd[1]: libpod-c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c.scope: Consumed 1.340s CPU time.
Dec 04 10:56:02 compute-0 podman[270177]: 2025-12-04 10:56:02.002844035 +0000 UTC m=+1.011125949 container died c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb553f1c62181536249ef16e8990739cfcd4183e3e75767c4135be6338c4d7fe-merged.mount: Deactivated successfully.
Dec 04 10:56:02 compute-0 podman[270177]: 2025-12-04 10:56:02.056264293 +0000 UTC m=+1.064546197 container remove c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:56:02 compute-0 systemd[1]: libpod-conmon-c87575943a84b8359828b8d0ffdf2cb0de7f0e22f6e63099a9f1e45878bd7d1c.scope: Deactivated successfully.
Dec 04 10:56:02 compute-0 sudo[270103]: pam_unix(sudo:session): session closed for user root
Dec 04 10:56:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:56:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:56:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:56:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:56:02 compute-0 sudo[270291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:56:02 compute-0 sudo[270291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:56:02 compute-0 sudo[270291]: pam_unix(sudo:session): session closed for user root
Dec 04 10:56:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:56:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:56:03 compute-0 ceph-mon[75358]: pgmap v1404: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:04 compute-0 podman[270317]: 2025-12-04 10:56:04.000242693 +0000 UTC m=+0.059325164 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 04 10:56:04 compute-0 podman[270316]: 2025-12-04 10:56:04.034195044 +0000 UTC m=+0.092256320 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:56:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:05 compute-0 ceph-mon[75358]: pgmap v1405: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:07 compute-0 ceph-mon[75358]: pgmap v1406: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:09 compute-0 ceph-mon[75358]: pgmap v1407: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:11 compute-0 ceph-mon[75358]: pgmap v1408: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:56:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/976976244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:56:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:56:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/976976244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:56:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/976976244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:56:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/976976244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:56:13 compute-0 ceph-mon[75358]: pgmap v1409: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:15 compute-0 ceph-mon[75358]: pgmap v1410: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:17 compute-0 ceph-mon[75358]: pgmap v1411: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:19 compute-0 ceph-mon[75358]: pgmap v1412: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:20 compute-0 ceph-mon[75358]: pgmap v1413: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:23 compute-0 ceph-mon[75358]: pgmap v1414: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:23 compute-0 podman[270359]: 2025-12-04 10:56:23.951519004 +0000 UTC m=+0.054582007 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 04 10:56:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:24 compute-0 ceph-mon[75358]: pgmap v1415: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:56:26
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'backups', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Dec 04 10:56:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:56:27 compute-0 ceph-mon[75358]: pgmap v1416: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:28 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:29 compute-0 ceph-mon[75358]: pgmap v1417: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:31 compute-0 sshd[182213]: Timeout before authentication for connection from 101.47.163.20 to 38.102.83.169, pid = 268804
Dec 04 10:56:31 compute-0 ceph-mon[75358]: pgmap v1418: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:33 compute-0 ceph-mon[75358]: pgmap v1419: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:34 compute-0 podman[270382]: 2025-12-04 10:56:34.973754882 +0000 UTC m=+0.075367695 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 04 10:56:35 compute-0 podman[270381]: 2025-12-04 10:56:35.00550907 +0000 UTC m=+0.108893067 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec 04 10:56:35 compute-0 ceph-mon[75358]: pgmap v1420: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:56:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:56:37 compute-0 ceph-mon[75358]: pgmap v1421: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:38 compute-0 ceph-mon[75358]: pgmap v1422: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:38 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:41 compute-0 ceph-mon[75358]: pgmap v1423: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:43 compute-0 ceph-mon[75358]: pgmap v1424: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:43 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:45 compute-0 ceph-mon[75358]: pgmap v1425: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:46 compute-0 sshd[182213]: drop connection #0 from [101.47.163.20]:41006 on [38.102.83.169]:22 penalty: exceeded LoginGraceTime
Dec 04 10:56:47 compute-0 ceph-mon[75358]: pgmap v1426: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:49 compute-0 ceph-mon[75358]: pgmap v1427: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:50 compute-0 sshd-session[270425]: Connection reset by authenticating user root 45.135.232.92 port 49060 [preauth]
Dec 04 10:56:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:51 compute-0 ceph-mon[75358]: pgmap v1428: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:51 compute-0 sshd-session[270427]: Connection reset by authenticating user root 45.135.232.92 port 49080 [preauth]
Dec 04 10:56:52 compute-0 nova_compute[244644]: 2025-12-04 10:56:52.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:53 compute-0 nova_compute[244644]: 2025-12-04 10:56:53.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:53 compute-0 nova_compute[244644]: 2025-12-04 10:56:53.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:56:53 compute-0 nova_compute[244644]: 2025-12-04 10:56:53.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:56:53 compute-0 nova_compute[244644]: 2025-12-04 10:56:53.392 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:56:53 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:54 compute-0 ceph-mon[75358]: pgmap v1429: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:54 compute-0 nova_compute[244644]: 2025-12-04 10:56:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:54 compute-0 sshd-session[270429]: Connection reset by authenticating user root 45.135.232.92 port 49088 [preauth]
Dec 04 10:56:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:56:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:56:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:56:54 compute-0 podman[270435]: 2025-12-04 10:56:54.944939306 +0000 UTC m=+0.050561729 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 04 10:56:55 compute-0 ceph-mon[75358]: pgmap v1430: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:56:55 compute-0 rsyslogd[1007]: imjournal: 15308 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.400 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.402 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:56:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:56:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774168624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:56:55 compute-0 nova_compute[244644]: 2025-12-04 10:56:55.977 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:56:56 compute-0 sshd-session[270433]: Connection reset by authenticating user root 45.135.232.92 port 28202 [preauth]
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.146 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.147 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.148 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.148 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.308 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.309 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:56:56 compute-0 nova_compute[244644]: 2025-12-04 10:56:56.370 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:56:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 04 10:56:56 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3774168624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:56:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:56:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794570103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:56:57 compute-0 nova_compute[244644]: 2025-12-04 10:56:57.204 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:56:57 compute-0 nova_compute[244644]: 2025-12-04 10:56:57.210 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:56:57 compute-0 nova_compute[244644]: 2025-12-04 10:56:57.316 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:56:57 compute-0 nova_compute[244644]: 2025-12-04 10:56:57.318 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:56:57 compute-0 nova_compute[244644]: 2025-12-04 10:56:57.318 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:56:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:58 compute-0 ceph-mon[75358]: pgmap v1431: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 04 10:56:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/794570103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:56:58 compute-0 sshd-session[270478]: Connection reset by authenticating user root 45.135.232.92 port 28218 [preauth]
Dec 04 10:56:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec 04 10:56:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:58 compute-0 nova_compute[244644]: 2025-12-04 10:56:58.736 244650 DEBUG oslo_concurrency.processutils [None req-d756daf3-793b-420f-81e0-210aaa24d49d a4b8cd4cc3ed49f488aae8af8459583a 340d3ca308e046158ba89c94dd84cdec - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:56:58 compute-0 nova_compute[244644]: 2025-12-04 10:56:58.758 244650 DEBUG oslo_concurrency.processutils [None req-d756daf3-793b-420f-81e0-210aaa24d49d a4b8cd4cc3ed49f488aae8af8459583a 340d3ca308e046158ba89c94dd84cdec - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:56:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.217720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819217811, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1356, "num_deletes": 251, "total_data_size": 2129824, "memory_usage": 2175016, "flush_reason": "Manual Compaction"}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819230361, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1233771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31100, "largest_seqno": 32455, "table_properties": {"data_size": 1229024, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12499, "raw_average_key_size": 20, "raw_value_size": 1218609, "raw_average_value_size": 2014, "num_data_blocks": 98, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845676, "oldest_key_time": 1764845676, "file_creation_time": 1764845819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12674 microseconds, and 4283 cpu microseconds.
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.230403) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1233771 bytes OK
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.230440) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234796) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234835) EVENT_LOG_v1 {"time_micros": 1764845819234828, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2123801, prev total WAL file size 2154749, number of live WAL files 2.
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.235781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1204KB)], [65(10MB)]
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819235827, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11818629, "oldest_snapshot_seqno": -1}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: pgmap v1432: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6324 keys, 9328499 bytes, temperature: kUnknown
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819745000, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 9328499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9287309, "index_size": 24248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 158763, "raw_average_key_size": 25, "raw_value_size": 9175140, "raw_average_value_size": 1450, "num_data_blocks": 993, "num_entries": 6324, "num_filter_entries": 6324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:56:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:56:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.745321) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9328499 bytes
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.836917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.2 rd, 18.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.1) write-amplify(7.6) OK, records in: 6773, records dropped: 449 output_compression: NoCompression
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.836983) EVENT_LOG_v1 {"time_micros": 1764845819836957, "job": 36, "event": "compaction_finished", "compaction_time_micros": 509251, "compaction_time_cpu_micros": 24849, "output_level": 6, "num_output_files": 1, "total_output_size": 9328499, "num_input_records": 6773, "num_output_records": 6324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819837766, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819840233, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.235697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:56:59 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:57:00 compute-0 nova_compute[244644]: 2025-12-04 10:57:00.319 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:00 compute-0 nova_compute[244644]: 2025-12-04 10:57:00.319 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:00 compute-0 nova_compute[244644]: 2025-12-04 10:57:00.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:00 compute-0 nova_compute[244644]: 2025-12-04 10:57:00.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:57:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec 04 10:57:00 compute-0 ceph-mon[75358]: pgmap v1433: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec 04 10:57:02 compute-0 sudo[270503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:57:02 compute-0 sudo[270503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:02 compute-0 sudo[270503]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:02 compute-0 nova_compute[244644]: 2025-12-04 10:57:02.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:02 compute-0 nova_compute[244644]: 2025-12-04 10:57:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:02 compute-0 sudo[270528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:57:02 compute-0 sudo[270528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 04 10:57:02 compute-0 sudo[270528]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:57:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:57:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:57:02 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:57:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:57:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:57:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:57:03 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:57:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:57:03 compute-0 sudo[270585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:57:03 compute-0 sudo[270585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:03 compute-0 sudo[270585]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:03 compute-0 sudo[270610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:57:03 compute-0 sudo[270610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.527042207 +0000 UTC m=+0.045587877 container create 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.506018363 +0000 UTC m=+0.024563853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:03 compute-0 ceph-mon[75358]: pgmap v1434: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:57:03 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:57:03 compute-0 systemd[1]: Started libpod-conmon-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope.
Dec 04 10:57:03 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.746773228 +0000 UTC m=+0.265318728 container init 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.756898605 +0000 UTC m=+0.275444085 container start 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.761255962 +0000 UTC m=+0.279801462 container attach 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:57:03 compute-0 exciting_poincare[270661]: 167 167
Dec 04 10:57:03 compute-0 systemd[1]: libpod-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope: Deactivated successfully.
Dec 04 10:57:03 compute-0 conmon[270661]: conmon 881de0ea28bc0d9c2d70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope/container/memory.events
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.766662224 +0000 UTC m=+0.285207724 container died 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd946ef0108f97312de5920a46ce19684fbd64858e78635c2a4dbb644151753f-merged.mount: Deactivated successfully.
Dec 04 10:57:03 compute-0 podman[270645]: 2025-12-04 10:57:03.809416131 +0000 UTC m=+0.327961611 container remove 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:57:03 compute-0 systemd[1]: libpod-conmon-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope: Deactivated successfully.
Dec 04 10:57:03 compute-0 podman[270685]: 2025-12-04 10:57:03.96944315 +0000 UTC m=+0.045868615 container create cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:57:04 compute-0 systemd[1]: Started libpod-conmon-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope.
Dec 04 10:57:04 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:04 compute-0 podman[270685]: 2025-12-04 10:57:03.947771409 +0000 UTC m=+0.024196894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:04 compute-0 podman[270685]: 2025-12-04 10:57:04.336512097 +0000 UTC m=+0.412937572 container init cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:57:04 compute-0 podman[270685]: 2025-12-04 10:57:04.350982861 +0000 UTC m=+0.427408366 container start cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 04 10:57:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:57:04 compute-0 podman[270685]: 2025-12-04 10:57:04.544222193 +0000 UTC m=+0.620647688 container attach cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:57:04 compute-0 eloquent_borg[270702]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:57:04 compute-0 eloquent_borg[270702]: --> All data devices are unavailable
Dec 04 10:57:04 compute-0 systemd[1]: libpod-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope: Deactivated successfully.
Dec 04 10:57:04 compute-0 podman[270685]: 2025-12-04 10:57:04.86470189 +0000 UTC m=+0.941127375 container died cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:57:05 compute-0 ceph-mon[75358]: pgmap v1435: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f-merged.mount: Deactivated successfully.
Dec 04 10:57:05 compute-0 podman[270685]: 2025-12-04 10:57:05.511382444 +0000 UTC m=+1.587807909 container remove cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:57:05 compute-0 systemd[1]: libpod-conmon-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope: Deactivated successfully.
Dec 04 10:57:05 compute-0 sudo[270610]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:05 compute-0 podman[270736]: 2025-12-04 10:57:05.587010286 +0000 UTC m=+0.065301570 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 04 10:57:05 compute-0 podman[270735]: 2025-12-04 10:57:05.625500128 +0000 UTC m=+0.103726811 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:57:05 compute-0 sudo[270771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:57:05 compute-0 sudo[270771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:05 compute-0 sudo[270771]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:05 compute-0 sudo[270805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:57:05 compute-0 sudo[270805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:05 compute-0 podman[270842]: 2025-12-04 10:57:05.982982592 +0000 UTC m=+0.046103070 container create 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 10:57:06 compute-0 systemd[1]: Started libpod-conmon-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope.
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:05.963132746 +0000 UTC m=+0.026253244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:06.07647665 +0000 UTC m=+0.139597158 container init 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:06.08497787 +0000 UTC m=+0.148098328 container start 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:06.088565987 +0000 UTC m=+0.151686475 container attach 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 04 10:57:06 compute-0 systemd[1]: libpod-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope: Deactivated successfully.
Dec 04 10:57:06 compute-0 mystifying_khorana[270859]: 167 167
Dec 04 10:57:06 compute-0 conmon[270859]: conmon 28a2a36d26c42618e308 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope/container/memory.events
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:06.093644222 +0000 UTC m=+0.156764680 container died 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 04 10:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9deb05296c63ddd12e00a8c25ae8b4bfcaad21f18fe6d5812a40c7cbbaae95b3-merged.mount: Deactivated successfully.
Dec 04 10:57:06 compute-0 podman[270842]: 2025-12-04 10:57:06.131504619 +0000 UTC m=+0.194625077 container remove 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 04 10:57:06 compute-0 systemd[1]: libpod-conmon-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope: Deactivated successfully.
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.304955076 +0000 UTC m=+0.047467604 container create c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Dec 04 10:57:06 compute-0 systemd[1]: Started libpod-conmon-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope.
Dec 04 10:57:06 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.285010917 +0000 UTC m=+0.027523455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.404237237 +0000 UTC m=+0.146749775 container init c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.414270372 +0000 UTC m=+0.156782890 container start c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.417778118 +0000 UTC m=+0.160290666 container attach c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]: {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     "0": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "devices": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "/dev/loop3"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             ],
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_name": "ceph_lv0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_size": "21470642176",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "name": "ceph_lv0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "tags": {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_name": "ceph",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.crush_device_class": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.encrypted": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.objectstore": "bluestore",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_id": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.vdo": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.with_tpm": "0"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             },
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "vg_name": "ceph_vg0"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         }
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     ],
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     "1": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "devices": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "/dev/loop4"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             ],
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_name": "ceph_lv1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_size": "21470642176",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "name": "ceph_lv1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "tags": {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_name": "ceph",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.crush_device_class": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.encrypted": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.objectstore": "bluestore",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_id": "1",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.vdo": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.with_tpm": "0"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             },
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "vg_name": "ceph_vg1"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         }
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     ],
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     "2": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "devices": [
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "/dev/loop5"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             ],
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_name": "ceph_lv2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_size": "21470642176",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "name": "ceph_lv2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "tags": {
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.cluster_name": "ceph",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.crush_device_class": "",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.encrypted": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.objectstore": "bluestore",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osd_id": "2",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.vdo": "0",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:                 "ceph.with_tpm": "0"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             },
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "type": "block",
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:             "vg_name": "ceph_vg2"
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:         }
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]:     ]
Dec 04 10:57:06 compute-0 blissful_hodgkin[270900]: }
Dec 04 10:57:06 compute-0 systemd[1]: libpod-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope: Deactivated successfully.
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.73028141 +0000 UTC m=+0.472793928 container died c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 04 10:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74-merged.mount: Deactivated successfully.
Dec 04 10:57:06 compute-0 podman[270883]: 2025-12-04 10:57:06.779431624 +0000 UTC m=+0.521944142 container remove c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 04 10:57:06 compute-0 systemd[1]: libpod-conmon-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope: Deactivated successfully.
Dec 04 10:57:06 compute-0 sudo[270805]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:06 compute-0 sudo[270921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:57:06 compute-0 sudo[270921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:06 compute-0 sudo[270921]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:06 compute-0 sudo[270946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:57:06 compute-0 sudo[270946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:07 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:07.105 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 04 10:57:07 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:07.108 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.285905274 +0000 UTC m=+0.050331413 container create 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:57:07 compute-0 systemd[1]: Started libpod-conmon-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope.
Dec 04 10:57:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.264286326 +0000 UTC m=+0.028712485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.373817487 +0000 UTC m=+0.138243626 container init 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.379501066 +0000 UTC m=+0.143927205 container start 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.383492324 +0000 UTC m=+0.147918453 container attach 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 04 10:57:07 compute-0 dreamy_hypatia[270999]: 167 167
Dec 04 10:57:07 compute-0 systemd[1]: libpod-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope: Deactivated successfully.
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.385594665 +0000 UTC m=+0.150020794 container died 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab505017658995a384ee88069a1bcda74d0154f017298532d9e319799cdb711-merged.mount: Deactivated successfully.
Dec 04 10:57:07 compute-0 podman[270982]: 2025-12-04 10:57:07.426979259 +0000 UTC m=+0.191405378 container remove 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:57:07 compute-0 systemd[1]: libpod-conmon-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope: Deactivated successfully.
Dec 04 10:57:07 compute-0 ceph-mon[75358]: pgmap v1436: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 04 10:57:07 compute-0 podman[271023]: 2025-12-04 10:57:07.604059225 +0000 UTC m=+0.058044312 container create 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:57:07 compute-0 systemd[1]: Started libpod-conmon-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope.
Dec 04 10:57:07 compute-0 podman[271023]: 2025-12-04 10:57:07.571394565 +0000 UTC m=+0.025379732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:57:07 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:57:07 compute-0 podman[271023]: 2025-12-04 10:57:07.71698439 +0000 UTC m=+0.170969487 container init 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 10:57:07 compute-0 podman[271023]: 2025-12-04 10:57:07.730473761 +0000 UTC m=+0.184458838 container start 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:57:07 compute-0 podman[271023]: 2025-12-04 10:57:07.733889984 +0000 UTC m=+0.187875061 container attach 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 04 10:57:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 04 10:57:08 compute-0 lvm[271118]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:57:08 compute-0 lvm[271119]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:57:08 compute-0 lvm[271119]: VG ceph_vg1 finished
Dec 04 10:57:08 compute-0 lvm[271118]: VG ceph_vg0 finished
Dec 04 10:57:08 compute-0 lvm[271121]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:57:08 compute-0 lvm[271121]: VG ceph_vg2 finished
Dec 04 10:57:08 compute-0 hopeful_saha[271040]: {}
Dec 04 10:57:08 compute-0 systemd[1]: libpod-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Deactivated successfully.
Dec 04 10:57:08 compute-0 systemd[1]: libpod-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Consumed 1.529s CPU time.
Dec 04 10:57:08 compute-0 podman[271023]: 2025-12-04 10:57:08.67916978 +0000 UTC m=+1.133154867 container died 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c-merged.mount: Deactivated successfully.
Dec 04 10:57:08 compute-0 podman[271023]: 2025-12-04 10:57:08.731985183 +0000 UTC m=+1.185970260 container remove 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:57:08 compute-0 systemd[1]: libpod-conmon-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Deactivated successfully.
Dec 04 10:57:08 compute-0 sudo[270946]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:57:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:57:08 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:08 compute-0 sudo[271137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:57:08 compute-0 sudo[271137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:57:08 compute-0 sudo[271137]: pam_unix(sudo:session): session closed for user root
Dec 04 10:57:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:10 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:10.109 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 04 10:57:10 compute-0 ceph-mon[75358]: pgmap v1437: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 04 10:57:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:57:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 04 10:57:11 compute-0 ceph-mon[75358]: pgmap v1438: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 04 10:57:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:57:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:57:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:57:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:57:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 04 10:57:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:57:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:57:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Dec 04 10:57:14 compute-0 ceph-mon[75358]: pgmap v1439: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec 04 10:57:15 compute-0 ceph-mon[75358]: pgmap v1440: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Dec 04 10:57:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:17 compute-0 ceph-mon[75358]: pgmap v1441: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:19 compute-0 ceph-mon[75358]: pgmap v1442: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:21 compute-0 ceph-mon[75358]: pgmap v1443: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:23 compute-0 ceph-mon[75358]: pgmap v1444: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:25 compute-0 ceph-mon[75358]: pgmap v1445: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:25 compute-0 podman[271162]: 2025-12-04 10:57:25.896838767 +0000 UTC m=+0.093929871 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:57:26
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Dec 04 10:57:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:57:27 compute-0 ceph-mon[75358]: pgmap v1446: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:57:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:29 compute-0 sshd-session[271182]: Invalid user user8 from 45.78.222.160 port 38600
Dec 04 10:57:29 compute-0 ceph-mon[75358]: pgmap v1447: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:29 compute-0 sshd-session[271182]: Received disconnect from 45.78.222.160 port 38600:11: Bye Bye [preauth]
Dec 04 10:57:29 compute-0 sshd-session[271182]: Disconnected from invalid user user8 45.78.222.160 port 38600 [preauth]
Dec 04 10:57:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:57:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:31 compute-0 ceph-mon[75358]: pgmap v1448: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:33 compute-0 ceph-mon[75358]: pgmap v1449: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:35 compute-0 ceph-mon[75358]: pgmap v1450: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:35 compute-0 podman[271185]: 2025-12-04 10:57:35.958445233 +0000 UTC m=+0.055407358 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:57:35 compute-0 podman[271184]: 2025-12-04 10:57:35.991920101 +0000 UTC m=+0.091032970 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 04 10:57:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:57:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:57:37 compute-0 ceph-mon[75358]: pgmap v1451: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:38 compute-0 ceph-mon[75358]: pgmap v1452: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:41 compute-0 ceph-mon[75358]: pgmap v1453: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:43 compute-0 ceph-mon[75358]: pgmap v1454: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:45 compute-0 ceph-mon[75358]: pgmap v1455: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:48 compute-0 ceph-mon[75358]: pgmap v1456: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:49 compute-0 ceph-mon[75358]: pgmap v1457: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:51 compute-0 ceph-mon[75358]: pgmap v1458: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:52 compute-0 ceph-mon[75358]: pgmap v1459: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:54 compute-0 nova_compute[244644]: 2025-12-04 10:57:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:54 compute-0 nova_compute[244644]: 2025-12-04 10:57:54.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.928 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:57:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.929 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:57:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.929 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:57:55 compute-0 nova_compute[244644]: 2025-12-04 10:57:55.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:55 compute-0 nova_compute[244644]: 2025-12-04 10:57:55.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:57:55 compute-0 nova_compute[244644]: 2025-12-04 10:57:55.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:57:55 compute-0 nova_compute[244644]: 2025-12-04 10:57:55.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:57:55 compute-0 ceph-mon[75358]: pgmap v1460: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.371 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.371 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:57:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:57:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095246848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:57:56 compute-0 nova_compute[244644]: 2025-12-04 10:57:56.951 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:57:56 compute-0 podman[271249]: 2025-12-04 10:57:56.951960145 +0000 UTC m=+0.058281218 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.114 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.116 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.116 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.117 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.194 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.195 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.211 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:57:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:57:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132208595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.784 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.790 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.807 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.809 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:57:57 compute-0 nova_compute[244644]: 2025-12-04 10:57:57.809 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:57:57 compute-0 ceph-mon[75358]: pgmap v1461: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2095246848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:57:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:57:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:57:58 compute-0 nova_compute[244644]: 2025-12-04 10:57:58.805 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4132208595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:57:58 compute-0 ceph-mon[75358]: pgmap v1462: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:57:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:57:59 compute-0 nova_compute[244644]: 2025-12-04 10:57:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:57:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:57:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:01 compute-0 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:01 compute-0 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:01 compute-0 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:58:01 compute-0 ceph-mon[75358]: pgmap v1463: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:03 compute-0 nova_compute[244644]: 2025-12-04 10:58:03.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:03 compute-0 nova_compute[244644]: 2025-12-04 10:58:03.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:03 compute-0 ceph-mon[75358]: pgmap v1464: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:05 compute-0 ceph-mon[75358]: pgmap v1465: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:06 compute-0 podman[271294]: 2025-12-04 10:58:06.944420758 +0000 UTC m=+0.042324098 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 10:58:06 compute-0 podman[271293]: 2025-12-04 10:58:06.970942807 +0000 UTC m=+0.074695580 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 04 10:58:07 compute-0 ceph-mon[75358]: pgmap v1466: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:08 compute-0 sudo[271336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:58:08 compute-0 sudo[271336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:08 compute-0 sudo[271336]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:08 compute-0 sudo[271361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 04 10:58:08 compute-0 sudo[271361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:09 compute-0 sudo[271361]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:58:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:58:09 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:09 compute-0 sudo[271405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:58:09 compute-0 sudo[271405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:09 compute-0 sudo[271405]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:09 compute-0 sudo[271430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:58:09 compute-0 sudo[271430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:09 compute-0 ceph-mon[75358]: pgmap v1467: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:09 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:10 compute-0 sudo[271430]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:58:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:58:10 compute-0 sudo[271486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:58:10 compute-0 sudo[271486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:10 compute-0 sudo[271486]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:10 compute-0 sudo[271511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:58:10 compute-0 sudo[271511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.497130829 +0000 UTC m=+0.027241439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:58:10 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.602723254 +0000 UTC m=+0.132833844 container create 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 04 10:58:10 compute-0 systemd[1]: Started libpod-conmon-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope.
Dec 04 10:58:10 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.814208442 +0000 UTC m=+0.344319052 container init 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.822304351 +0000 UTC m=+0.352414941 container start 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.827501038 +0000 UTC m=+0.357611638 container attach 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 04 10:58:10 compute-0 friendly_cohen[271564]: 167 167
Dec 04 10:58:10 compute-0 systemd[1]: libpod-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope: Deactivated successfully.
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.82964923 +0000 UTC m=+0.359759820 container died 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17b1939c06d86a71129cbab6acb03b13380ce2a0367bbe16d4bc17c34f312e6-merged.mount: Deactivated successfully.
Dec 04 10:58:10 compute-0 podman[271548]: 2025-12-04 10:58:10.872046478 +0000 UTC m=+0.402157068 container remove 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:58:10 compute-0 systemd[1]: libpod-conmon-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope: Deactivated successfully.
Dec 04 10:58:11 compute-0 podman[271589]: 2025-12-04 10:58:11.026034119 +0000 UTC m=+0.034416233 container create 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:58:11 compute-0 systemd[1]: Started libpod-conmon-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope.
Dec 04 10:58:11 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:11 compute-0 podman[271589]: 2025-12-04 10:58:11.010022507 +0000 UTC m=+0.018404641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:58:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:58:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:58:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:58:11 compute-0 podman[271589]: 2025-12-04 10:58:11.768777716 +0000 UTC m=+0.777159860 container init 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:58:11 compute-0 podman[271589]: 2025-12-04 10:58:11.777060769 +0000 UTC m=+0.785442883 container start 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:58:12 compute-0 podman[271589]: 2025-12-04 10:58:12.100042867 +0000 UTC m=+1.108425121 container attach 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 10:58:12 compute-0 ceph-mon[75358]: pgmap v1468: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:58:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:58:12 compute-0 wonderful_haslett[271605]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:58:12 compute-0 wonderful_haslett[271605]: --> All data devices are unavailable
Dec 04 10:58:12 compute-0 systemd[1]: libpod-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope: Deactivated successfully.
Dec 04 10:58:12 compute-0 podman[271589]: 2025-12-04 10:58:12.291453614 +0000 UTC m=+1.299835728 container died 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:58:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031-merged.mount: Deactivated successfully.
Dec 04 10:58:12 compute-0 podman[271589]: 2025-12-04 10:58:12.55147222 +0000 UTC m=+1.559854334 container remove 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:58:12 compute-0 systemd[1]: libpod-conmon-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope: Deactivated successfully.
Dec 04 10:58:12 compute-0 sudo[271511]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:12 compute-0 sudo[271639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:58:12 compute-0 sudo[271639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:12 compute-0 sudo[271639]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:12 compute-0 sudo[271664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:58:12 compute-0 sudo[271664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.084397419 +0000 UTC m=+0.095837427 container create 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.010171461 +0000 UTC m=+0.021611479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:13 compute-0 systemd[1]: Started libpod-conmon-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope.
Dec 04 10:58:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:13 compute-0 ceph-mon[75358]: pgmap v1469: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.168685074 +0000 UTC m=+0.180125092 container init 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.175656594 +0000 UTC m=+0.187096592 container start 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:58:13 compute-0 vibrant_williams[271717]: 167 167
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.179867978 +0000 UTC m=+0.191307996 container attach 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:58:13 compute-0 systemd[1]: libpod-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope: Deactivated successfully.
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.181114338 +0000 UTC m=+0.192554366 container died 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0363223f673a5fa286483e893a80b11bb9f0637a2e5feabad742a9cd41e8c7-merged.mount: Deactivated successfully.
Dec 04 10:58:13 compute-0 podman[271701]: 2025-12-04 10:58:13.223315861 +0000 UTC m=+0.234755859 container remove 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 04 10:58:13 compute-0 systemd[1]: libpod-conmon-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope: Deactivated successfully.
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.376794859 +0000 UTC m=+0.041890757 container create 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.358559942 +0000 UTC m=+0.023655870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:13 compute-0 systemd[1]: Started libpod-conmon-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope.
Dec 04 10:58:13 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.528953545 +0000 UTC m=+0.194049463 container init 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.537938314 +0000 UTC m=+0.203034212 container start 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.646512793 +0000 UTC m=+0.311608691 container attach 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 04 10:58:13 compute-0 youthful_haslett[271756]: {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     "0": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "devices": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "/dev/loop3"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             ],
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_name": "ceph_lv0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_size": "21470642176",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "name": "ceph_lv0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "tags": {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_name": "ceph",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.crush_device_class": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.encrypted": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.objectstore": "bluestore",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_id": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.vdo": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.with_tpm": "0"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             },
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "vg_name": "ceph_vg0"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         }
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     ],
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     "1": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "devices": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "/dev/loop4"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             ],
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_name": "ceph_lv1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_size": "21470642176",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "name": "ceph_lv1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "tags": {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_name": "ceph",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.crush_device_class": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.encrypted": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.objectstore": "bluestore",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_id": "1",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.vdo": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.with_tpm": "0"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             },
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "vg_name": "ceph_vg1"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         }
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     ],
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     "2": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "devices": [
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "/dev/loop5"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             ],
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_name": "ceph_lv2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_size": "21470642176",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "name": "ceph_lv2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "tags": {
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.cluster_name": "ceph",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.crush_device_class": "",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.encrypted": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.objectstore": "bluestore",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osd_id": "2",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.vdo": "0",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:                 "ceph.with_tpm": "0"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             },
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "type": "block",
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:             "vg_name": "ceph_vg2"
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:         }
Dec 04 10:58:13 compute-0 youthful_haslett[271756]:     ]
Dec 04 10:58:13 compute-0 youthful_haslett[271756]: }
Dec 04 10:58:13 compute-0 systemd[1]: libpod-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope: Deactivated successfully.
Dec 04 10:58:13 compute-0 podman[271740]: 2025-12-04 10:58:13.85464504 +0000 UTC m=+0.519740948 container died 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 04 10:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956-merged.mount: Deactivated successfully.
Dec 04 10:58:14 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 10:58:14 compute-0 podman[271740]: 2025-12-04 10:58:14.191164609 +0000 UTC m=+0.856260507 container remove 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 04 10:58:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:14 compute-0 systemd[1]: libpod-conmon-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope: Deactivated successfully.
Dec 04 10:58:14 compute-0 sudo[271664]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:14 compute-0 sudo[271779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:58:14 compute-0 sudo[271779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:14 compute-0 sudo[271779]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:14 compute-0 sudo[271804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:58:14 compute-0 sudo[271804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.611141083 +0000 UTC m=+0.022526303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.73109106 +0000 UTC m=+0.142476260 container create bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 04 10:58:14 compute-0 systemd[1]: Started libpod-conmon-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope.
Dec 04 10:58:14 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.814518122 +0000 UTC m=+0.225903342 container init bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.820175921 +0000 UTC m=+0.231561121 container start bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:58:14 compute-0 infallible_jones[271856]: 167 167
Dec 04 10:58:14 compute-0 systemd[1]: libpod-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope: Deactivated successfully.
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.874405569 +0000 UTC m=+0.285790779 container attach bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:58:14 compute-0 podman[271840]: 2025-12-04 10:58:14.875947056 +0000 UTC m=+0.287332266 container died bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd9240817b6fa1798e21465a39bce941953aa56b49679d7df8b53ac0254c2ffa-merged.mount: Deactivated successfully.
Dec 04 10:58:15 compute-0 podman[271840]: 2025-12-04 10:58:15.08479698 +0000 UTC m=+0.496182180 container remove bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:58:15 compute-0 systemd[1]: libpod-conmon-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope: Deactivated successfully.
Dec 04 10:58:15 compute-0 podman[271883]: 2025-12-04 10:58:15.245349362 +0000 UTC m=+0.050771524 container create f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:58:15 compute-0 podman[271883]: 2025-12-04 10:58:15.217138491 +0000 UTC m=+0.022560633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:58:15 compute-0 systemd[1]: Started libpod-conmon-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope.
Dec 04 10:58:15 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:58:15 compute-0 podman[271883]: 2025-12-04 10:58:15.365548115 +0000 UTC m=+0.170970237 container init f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:58:15 compute-0 podman[271883]: 2025-12-04 10:58:15.37229367 +0000 UTC m=+0.177715792 container start f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 04 10:58:15 compute-0 podman[271883]: 2025-12-04 10:58:15.375310614 +0000 UTC m=+0.180732756 container attach f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 10:58:15 compute-0 ceph-mon[75358]: pgmap v1470: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:16 compute-0 lvm[271978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:58:16 compute-0 lvm[271979]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:58:16 compute-0 lvm[271978]: VG ceph_vg0 finished
Dec 04 10:58:16 compute-0 lvm[271979]: VG ceph_vg1 finished
Dec 04 10:58:16 compute-0 lvm[271981]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:58:16 compute-0 lvm[271981]: VG ceph_vg2 finished
Dec 04 10:58:16 compute-0 priceless_austin[271900]: {}
Dec 04 10:58:16 compute-0 systemd[1]: libpod-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Deactivated successfully.
Dec 04 10:58:16 compute-0 systemd[1]: libpod-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Consumed 1.363s CPU time.
Dec 04 10:58:16 compute-0 podman[271984]: 2025-12-04 10:58:16.227824809 +0000 UTC m=+0.024011470 container died f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 04 10:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd-merged.mount: Deactivated successfully.
Dec 04 10:58:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:16 compute-0 podman[271984]: 2025-12-04 10:58:16.462129416 +0000 UTC m=+0.258316077 container remove f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 04 10:58:16 compute-0 systemd[1]: libpod-conmon-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Deactivated successfully.
Dec 04 10:58:16 compute-0 sudo[271804]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:58:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:58:16 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:16 compute-0 sudo[271998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:58:16 compute-0 sudo[271998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:58:16 compute-0 sudo[271998]: pam_unix(sudo:session): session closed for user root
Dec 04 10:58:17 compute-0 ceph-mon[75358]: pgmap v1471: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:58:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:18 compute-0 ceph-mon[75358]: pgmap v1472: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:21 compute-0 ceph-mon[75358]: pgmap v1473: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:23 compute-0 ceph-mon[75358]: pgmap v1474: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.245999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904246062, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 949, "num_deletes": 251, "total_data_size": 1388432, "memory_usage": 1416080, "flush_reason": "Manual Compaction"}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 04 10:58:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904467790, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1353575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32456, "largest_seqno": 33404, "table_properties": {"data_size": 1348795, "index_size": 2368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10318, "raw_average_key_size": 19, "raw_value_size": 1339271, "raw_average_value_size": 2555, "num_data_blocks": 106, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845819, "oldest_key_time": 1764845819, "file_creation_time": 1764845904, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 221860 microseconds, and 4707 cpu microseconds.
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.467856) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1353575 bytes OK
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.467886) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516395) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516446) EVENT_LOG_v1 {"time_micros": 1764845904516435, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1383842, prev total WAL file size 1383842, number of live WAL files 2.
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.517220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1321KB)], [68(9109KB)]
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904517278, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10682074, "oldest_snapshot_seqno": -1}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6334 keys, 8783747 bytes, temperature: kUnknown
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904631583, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8783747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742958, "index_size": 23847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 159654, "raw_average_key_size": 25, "raw_value_size": 8631010, "raw_average_value_size": 1362, "num_data_blocks": 970, "num_entries": 6334, "num_filter_entries": 6334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845904, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.631809) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8783747 bytes
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.634073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.4 rd, 76.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(14.4) write-amplify(6.5) OK, records in: 6848, records dropped: 514 output_compression: NoCompression
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.634088) EVENT_LOG_v1 {"time_micros": 1764845904634080, "job": 38, "event": "compaction_finished", "compaction_time_micros": 114368, "compaction_time_cpu_micros": 21709, "output_level": 6, "num_output_files": 1, "total_output_size": 8783747, "num_input_records": 6848, "num_output_records": 6334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904634390, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904635991, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.517166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:58:25 compute-0 ceph-mon[75358]: pgmap v1475: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:58:26
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images', '.rgw.root']
Dec 04 10:58:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:58:27 compute-0 ceph-mon[75358]: pgmap v1476: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:27 compute-0 podman[272023]: 2025-12-04 10:58:27.962957418 +0000 UTC m=+0.064680649 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:29 compute-0 ceph-mon[75358]: pgmap v1477: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:31 compute-0 ceph-mon[75358]: pgmap v1478: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:33 compute-0 ceph-mon[75358]: pgmap v1479: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:35 compute-0 ceph-mon[75358]: pgmap v1480: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:58:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:58:37 compute-0 ceph-mon[75358]: pgmap v1481: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:37 compute-0 podman[272045]: 2025-12-04 10:58:37.977022902 +0000 UTC m=+0.081523483 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 04 10:58:38 compute-0 podman[272044]: 2025-12-04 10:58:38.026141008 +0000 UTC m=+0.134485324 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 04 10:58:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:39 compute-0 ceph-mon[75358]: pgmap v1482: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:41 compute-0 ceph-mon[75358]: pgmap v1483: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:43 compute-0 ceph-mon[75358]: pgmap v1484: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:45 compute-0 ceph-mon[75358]: pgmap v1485: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:47 compute-0 ceph-mon[75358]: pgmap v1486: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:48 compute-0 ceph-mon[75358]: pgmap v1487: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:51 compute-0 ceph-mon[75358]: pgmap v1488: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:53 compute-0 ceph-mon[75358]: pgmap v1489: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:54 compute-0 nova_compute[244644]: 2025-12-04 10:58:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:54 compute-0 ceph-mon[75358]: pgmap v1490: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:58:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:58:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:58:56 compute-0 nova_compute[244644]: 2025-12-04 10:58:56.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:56 compute-0 nova_compute[244644]: 2025-12-04 10:58:56.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:58:56 compute-0 nova_compute[244644]: 2025-12-04 10:58:56.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:58:56 compute-0 nova_compute[244644]: 2025-12-04 10:58:56.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:58:56 compute-0 nova_compute[244644]: 2025-12-04 10:58:56.355 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.369 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:58:57 compute-0 ceph-mon[75358]: pgmap v1491: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:58:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511190525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:58:57 compute-0 nova_compute[244644]: 2025-12-04 10:58:57.896 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.034 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.035 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4961MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.035 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.036 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:58:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.277 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.278 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.349 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.424 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.425 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 04 10:58:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.444 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 04 10:58:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.463 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 04 10:58:58 compute-0 nova_compute[244644]: 2025-12-04 10:58:58.482 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:58:58 compute-0 podman[272131]: 2025-12-04 10:58:58.940209884 +0000 UTC m=+0.052007178 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:58:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3511190525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:58:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 10:58:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99470823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.093 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.099 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.115 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.116 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.116 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.117 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.117 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 04 10:58:59 compute-0 nova_compute[244644]: 2025-12-04 10:58:59.132 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 04 10:58:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:58:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:58:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:58:59 compute-0 ceph-mon[75358]: pgmap v1492: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:58:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/99470823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 10:59:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:00 compute-0 ceph-mon[75358]: pgmap v1493: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:02 compute-0 nova_compute[244644]: 2025-12-04 10:59:02.134 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:02 compute-0 nova_compute[244644]: 2025-12-04 10:59:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:03 compute-0 nova_compute[244644]: 2025-12-04 10:59:03.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:03 compute-0 nova_compute[244644]: 2025-12-04 10:59:03.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:03 compute-0 nova_compute[244644]: 2025-12-04 10:59:03.337 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 10:59:03 compute-0 ceph-mon[75358]: pgmap v1494: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:04 compute-0 nova_compute[244644]: 2025-12-04 10:59:04.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:05 compute-0 ceph-mon[75358]: pgmap v1495: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:07 compute-0 ceph-mon[75358]: pgmap v1496: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:08 compute-0 podman[272156]: 2025-12-04 10:59:08.94298045 +0000 UTC m=+0.045495078 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 04 10:59:08 compute-0 podman[272155]: 2025-12-04 10:59:08.99713329 +0000 UTC m=+0.102116098 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true)
Dec 04 10:59:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:09 compute-0 ceph-mon[75358]: pgmap v1497: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:10 compute-0 nova_compute[244644]: 2025-12-04 10:59:10.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 10:59:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:59:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 10:59:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:59:11 compute-0 ceph-mon[75358]: pgmap v1498: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 10:59:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 10:59:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:13 compute-0 nova_compute[244644]: 2025-12-04 10:59:13.351 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:13 compute-0 nova_compute[244644]: 2025-12-04 10:59:13.352 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 04 10:59:13 compute-0 ceph-mon[75358]: pgmap v1499: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:15 compute-0 ceph-mon[75358]: pgmap v1500: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:16 compute-0 sudo[272200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:59:16 compute-0 sudo[272200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:16 compute-0 sudo[272200]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:16 compute-0 sudo[272225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 10:59:16 compute-0 sudo[272225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:17 compute-0 sudo[272225]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:59:17 compute-0 sudo[272281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:59:17 compute-0 sudo[272281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:17 compute-0 sudo[272281]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:17 compute-0 sudo[272306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 10:59:17 compute-0 sudo[272306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.818625025 +0000 UTC m=+0.049660591 container create ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 10:59:17 compute-0 ceph-mon[75358]: pgmap v1501: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 10:59:17 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 10:59:17 compute-0 systemd[1]: Started libpod-conmon-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope.
Dec 04 10:59:17 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.797327071 +0000 UTC m=+0.028362657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.899010719 +0000 UTC m=+0.130046305 container init ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.908750898 +0000 UTC m=+0.139786464 container start ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.913897205 +0000 UTC m=+0.144932801 container attach ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 04 10:59:17 compute-0 ecstatic_visvesvaraya[272359]: 167 167
Dec 04 10:59:17 compute-0 systemd[1]: libpod-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope: Deactivated successfully.
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.91860537 +0000 UTC m=+0.149640936 container died ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 10:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb548d5bd6bcb2672dc4be056990c8401b41b1fab696133427df491cf9f8408-merged.mount: Deactivated successfully.
Dec 04 10:59:17 compute-0 podman[272343]: 2025-12-04 10:59:17.963313878 +0000 UTC m=+0.194349444 container remove ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:59:17 compute-0 systemd[1]: libpod-conmon-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope: Deactivated successfully.
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.126801074 +0000 UTC m=+0.038992529 container create 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:59:18 compute-0 systemd[1]: Started libpod-conmon-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope.
Dec 04 10:59:18 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.205686551 +0000 UTC m=+0.117878026 container init 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.110843881 +0000 UTC m=+0.023035356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.21462772 +0000 UTC m=+0.126819175 container start 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.218387543 +0000 UTC m=+0.130579008 container attach 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:59:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:18 compute-0 objective_curran[272399]: --> passed data devices: 0 physical, 3 LVM
Dec 04 10:59:18 compute-0 objective_curran[272399]: --> All data devices are unavailable
Dec 04 10:59:18 compute-0 systemd[1]: libpod-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope: Deactivated successfully.
Dec 04 10:59:18 compute-0 podman[272382]: 2025-12-04 10:59:18.680214195 +0000 UTC m=+0.592405650 container died 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 04 10:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4-merged.mount: Deactivated successfully.
Dec 04 10:59:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:20 compute-0 ceph-mon[75358]: pgmap v1502: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:20 compute-0 podman[272382]: 2025-12-04 10:59:20.343619448 +0000 UTC m=+2.255810913 container remove 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 04 10:59:20 compute-0 systemd[1]: libpod-conmon-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope: Deactivated successfully.
Dec 04 10:59:20 compute-0 sudo[272306]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:20 compute-0 sudo[272431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:59:20 compute-0 sudo[272431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:20 compute-0 sudo[272431]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:20 compute-0 sudo[272456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 10:59:20 compute-0 sudo[272456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:20 compute-0 podman[272493]: 2025-12-04 10:59:20.815003935 +0000 UTC m=+0.021986221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:20 compute-0 podman[272493]: 2025-12-04 10:59:20.93288052 +0000 UTC m=+0.139862786 container create 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:59:20 compute-0 systemd[1]: Started libpod-conmon-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope.
Dec 04 10:59:20 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:21 compute-0 podman[272493]: 2025-12-04 10:59:21.006728363 +0000 UTC m=+0.213710659 container init 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 10:59:21 compute-0 podman[272493]: 2025-12-04 10:59:21.013022118 +0000 UTC m=+0.220004384 container start 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 04 10:59:21 compute-0 podman[272493]: 2025-12-04 10:59:21.016611337 +0000 UTC m=+0.223593693 container attach 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:59:21 compute-0 optimistic_ride[272509]: 167 167
Dec 04 10:59:21 compute-0 systemd[1]: libpod-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope: Deactivated successfully.
Dec 04 10:59:21 compute-0 podman[272493]: 2025-12-04 10:59:21.019650781 +0000 UTC m=+0.226633067 container died 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea3bfa1e9f8733e48f3acd06c90a2b8245a327f110e12b33e52ecfaad2d70ff3-merged.mount: Deactivated successfully.
Dec 04 10:59:21 compute-0 ceph-mon[75358]: pgmap v1503: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:21 compute-0 podman[272493]: 2025-12-04 10:59:21.058524635 +0000 UTC m=+0.265506901 container remove 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 04 10:59:21 compute-0 systemd[1]: libpod-conmon-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope: Deactivated successfully.
Dec 04 10:59:21 compute-0 podman[272534]: 2025-12-04 10:59:21.196952815 +0000 UTC m=+0.022537504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:21 compute-0 podman[272534]: 2025-12-04 10:59:21.478402467 +0000 UTC m=+0.303987136 container create 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 04 10:59:21 compute-0 systemd[1]: Started libpod-conmon-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope.
Dec 04 10:59:21 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:21 compute-0 podman[272534]: 2025-12-04 10:59:21.706729085 +0000 UTC m=+0.532313784 container init 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 10:59:21 compute-0 podman[272534]: 2025-12-04 10:59:21.723013595 +0000 UTC m=+0.548598264 container start 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 04 10:59:21 compute-0 podman[272534]: 2025-12-04 10:59:21.908828499 +0000 UTC m=+0.734413198 container attach 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 04 10:59:21 compute-0 great_dijkstra[272551]: {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     "0": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "devices": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "/dev/loop3"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             ],
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_name": "ceph_lv0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_size": "21470642176",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "name": "ceph_lv0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "tags": {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_name": "ceph",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.crush_device_class": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.encrypted": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.objectstore": "bluestore",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_id": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.vdo": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.with_tpm": "0"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             },
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "vg_name": "ceph_vg0"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         }
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     ],
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     "1": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "devices": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "/dev/loop4"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             ],
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_name": "ceph_lv1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_size": "21470642176",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "name": "ceph_lv1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "tags": {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_name": "ceph",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.crush_device_class": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.encrypted": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.objectstore": "bluestore",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_id": "1",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.vdo": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.with_tpm": "0"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             },
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "vg_name": "ceph_vg1"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         }
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     ],
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     "2": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "devices": [
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "/dev/loop5"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             ],
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_name": "ceph_lv2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_size": "21470642176",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "name": "ceph_lv2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "tags": {
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.cluster_name": "ceph",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.crush_device_class": "",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.encrypted": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.objectstore": "bluestore",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osd_id": "2",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.vdo": "0",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:                 "ceph.with_tpm": "0"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             },
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "type": "block",
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:             "vg_name": "ceph_vg2"
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:         }
Dec 04 10:59:21 compute-0 great_dijkstra[272551]:     ]
Dec 04 10:59:21 compute-0 great_dijkstra[272551]: }
Dec 04 10:59:22 compute-0 systemd[1]: libpod-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope: Deactivated successfully.
Dec 04 10:59:22 compute-0 podman[272534]: 2025-12-04 10:59:22.030730073 +0000 UTC m=+0.856314742 container died 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 04 10:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc-merged.mount: Deactivated successfully.
Dec 04 10:59:22 compute-0 podman[272534]: 2025-12-04 10:59:22.078825764 +0000 UTC m=+0.904410433 container remove 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 10:59:22 compute-0 systemd[1]: libpod-conmon-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope: Deactivated successfully.
Dec 04 10:59:22 compute-0 sudo[272456]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:22 compute-0 sudo[272573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 10:59:22 compute-0 sudo[272573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:22 compute-0 sudo[272573]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:22 compute-0 sudo[272598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 10:59:22 compute-0 sudo[272598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.550621331 +0000 UTC m=+0.042917524 container create d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 10:59:22 compute-0 systemd[1]: Started libpod-conmon-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope.
Dec 04 10:59:22 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.530889407 +0000 UTC m=+0.023185650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.624761782 +0000 UTC m=+0.117057975 container init d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.633052036 +0000 UTC m=+0.125348229 container start d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.637152746 +0000 UTC m=+0.129448959 container attach d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:59:22 compute-0 musing_grothendieck[272650]: 167 167
Dec 04 10:59:22 compute-0 systemd[1]: libpod-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope: Deactivated successfully.
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.638642693 +0000 UTC m=+0.130938886 container died d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 04 10:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b9bc19fa99b7d819aab9aa86d070e5367747894db9c4cb808dbd38220ef7a75-merged.mount: Deactivated successfully.
Dec 04 10:59:22 compute-0 podman[272634]: 2025-12-04 10:59:22.677742283 +0000 UTC m=+0.170038476 container remove d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:59:22 compute-0 systemd[1]: libpod-conmon-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope: Deactivated successfully.
Dec 04 10:59:22 compute-0 podman[272674]: 2025-12-04 10:59:22.859919397 +0000 UTC m=+0.049506286 container create a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:59:22 compute-0 systemd[1]: Started libpod-conmon-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope.
Dec 04 10:59:22 compute-0 systemd[1]: Started libcrun container.
Dec 04 10:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:22 compute-0 podman[272674]: 2025-12-04 10:59:22.836995125 +0000 UTC m=+0.026582074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 10:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 10:59:22 compute-0 podman[272674]: 2025-12-04 10:59:22.943874619 +0000 UTC m=+0.133461528 container init a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 10:59:22 compute-0 podman[272674]: 2025-12-04 10:59:22.951110197 +0000 UTC m=+0.140697086 container start a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 10:59:22 compute-0 podman[272674]: 2025-12-04 10:59:22.9544761 +0000 UTC m=+0.144062999 container attach a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 10:59:23 compute-0 ceph-mon[75358]: pgmap v1504: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:23 compute-0 lvm[272768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 10:59:23 compute-0 lvm[272769]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 10:59:23 compute-0 lvm[272768]: VG ceph_vg0 finished
Dec 04 10:59:23 compute-0 lvm[272769]: VG ceph_vg1 finished
Dec 04 10:59:23 compute-0 lvm[272771]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 10:59:23 compute-0 lvm[272771]: VG ceph_vg2 finished
Dec 04 10:59:23 compute-0 boring_ramanujan[272690]: {}
Dec 04 10:59:23 compute-0 systemd[1]: libpod-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Deactivated successfully.
Dec 04 10:59:23 compute-0 systemd[1]: libpod-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Consumed 1.397s CPU time.
Dec 04 10:59:23 compute-0 podman[272674]: 2025-12-04 10:59:23.810921614 +0000 UTC m=+1.000508523 container died a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 10:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77-merged.mount: Deactivated successfully.
Dec 04 10:59:23 compute-0 podman[272674]: 2025-12-04 10:59:23.860511262 +0000 UTC m=+1.050098151 container remove a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 10:59:23 compute-0 systemd[1]: libpod-conmon-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Deactivated successfully.
Dec 04 10:59:23 compute-0 sudo[272598]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 10:59:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:23 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 10:59:23 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:23 compute-0 sudo[272787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 10:59:23 compute-0 sudo[272787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 10:59:23 compute-0 sudo[272787]: pam_unix(sudo:session): session closed for user root
Dec 04 10:59:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.243929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964244283, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 257, "total_data_size": 914605, "memory_usage": 928232, "flush_reason": "Manual Compaction"}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964253386, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 906639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33405, "largest_seqno": 34125, "table_properties": {"data_size": 902814, "index_size": 1605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8285, "raw_average_key_size": 18, "raw_value_size": 895224, "raw_average_value_size": 2039, "num_data_blocks": 72, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845905, "oldest_key_time": 1764845905, "file_creation_time": 1764845964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9489 microseconds, and 3980 cpu microseconds.
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.253439) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 906639 bytes OK
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.253471) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255700) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255717) EVENT_LOG_v1 {"time_micros": 1764845964255710, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 910861, prev total WAL file size 910861, number of live WAL files 2.
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.256396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(885KB)], [71(8577KB)]
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964256442, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9690386, "oldest_snapshot_seqno": -1}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6247 keys, 9444706 bytes, temperature: kUnknown
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964314462, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9444706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9403589, "index_size": 24367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158761, "raw_average_key_size": 25, "raw_value_size": 9292219, "raw_average_value_size": 1487, "num_data_blocks": 988, "num_entries": 6247, "num_filter_entries": 6247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.314722) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9444706 bytes
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.316300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 162.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.1) write-amplify(10.4) OK, records in: 6773, records dropped: 526 output_compression: NoCompression
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.316316) EVENT_LOG_v1 {"time_micros": 1764845964316307, "job": 40, "event": "compaction_finished", "compaction_time_micros": 58110, "compaction_time_cpu_micros": 25473, "output_level": 6, "num_output_files": 1, "total_output_size": 9444706, "num_input_records": 6773, "num_output_records": 6247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964316553, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964318037, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.256290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 10:59:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:24 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 10:59:24 compute-0 ceph-mon[75358]: pgmap v1505: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:59:26
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'images']
Dec 04 10:59:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 10:59:27 compute-0 ceph-mon[75358]: pgmap v1506: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:59:29 compute-0 ceph-mon[75358]: pgmap v1507: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:59:29 compute-0 podman[272812]: 2025-12-04 10:59:29.970992904 +0000 UTC m=+0.077286239 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 04 10:59:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:31 compute-0 ceph-mon[75358]: pgmap v1508: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:33 compute-0 ceph-mon[75358]: pgmap v1509: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:35 compute-0 sshd-session[272832]: Invalid user posiflex from 107.175.213.239 port 49738
Dec 04 10:59:35 compute-0 sshd-session[272832]: Received disconnect from 107.175.213.239 port 49738:11: Bye Bye [preauth]
Dec 04 10:59:35 compute-0 sshd-session[272832]: Disconnected from invalid user posiflex 107.175.213.239 port 49738 [preauth]
Dec 04 10:59:35 compute-0 ceph-mon[75358]: pgmap v1510: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 10:59:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 10:59:38 compute-0 ceph-mon[75358]: pgmap v1511: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:39 compute-0 ceph-mon[75358]: pgmap v1512: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:39 compute-0 podman[272834]: 2025-12-04 10:59:39.974539818 +0000 UTC m=+0.079859732 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 04 10:59:39 compute-0 podman[272835]: 2025-12-04 10:59:39.972974169 +0000 UTC m=+0.075663189 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 04 10:59:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:41 compute-0 ceph-mon[75358]: pgmap v1513: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:43 compute-0 ceph-mon[75358]: pgmap v1514: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:45 compute-0 ceph-mon[75358]: pgmap v1515: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:47 compute-0 ceph-mon[75358]: pgmap v1516: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:49 compute-0 ceph-mon[75358]: pgmap v1517: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:50 compute-0 sshd-session[272153]: Received disconnect from 101.47.163.20 port 37818:11: Bye Bye [preauth]
Dec 04 10:59:50 compute-0 sshd-session[272153]: Disconnected from authenticating user root 101.47.163.20 port 37818 [preauth]
Dec 04 10:59:51 compute-0 ceph-mon[75358]: pgmap v1518: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:53 compute-0 ceph-mon[75358]: pgmap v1519: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:54 compute-0 nova_compute[244644]: 2025-12-04 10:59:54.444 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.931 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:59:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.931 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:59:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:59:55 compute-0 ceph-mon[75358]: pgmap v1520: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:57 compute-0 ceph-mon[75358]: pgmap v1521: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:59:58 compute-0 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:58 compute-0 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 10:59:58 compute-0 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 10:59:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 10:59:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:58 compute-0 nova_compute[244644]: 2025-12-04 10:59:58.637 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 10:59:58 compute-0 nova_compute[244644]: 2025-12-04 10:59:58.637 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.544 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 10:59:59 compute-0 nova_compute[244644]: 2025-12-04 10:59:59.546 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 10:59:59 compute-0 ceph-mon[75358]: pgmap v1522: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 10:59:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 10:59:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:00:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844193991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.158 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.342 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.344 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.344 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.345 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:00:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3844193991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.896 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.896 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 11:00:00 compute-0 nova_compute[244644]: 2025-12-04 11:00:00.921 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 11:00:00 compute-0 podman[272902]: 2025-12-04 11:00:00.956178975 +0000 UTC m=+0.065537811 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:00:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:00:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281020915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:00:01 compute-0 nova_compute[244644]: 2025-12-04 11:00:01.517 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:00:01 compute-0 nova_compute[244644]: 2025-12-04 11:00:01.523 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 11:00:01 compute-0 nova_compute[244644]: 2025-12-04 11:00:01.645 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 11:00:01 compute-0 nova_compute[244644]: 2025-12-04 11:00:01.647 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 11:00:01 compute-0 nova_compute[244644]: 2025-12-04 11:00:01.647 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:00:01 compute-0 ceph-mon[75358]: pgmap v1523: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:01 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/281020915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:00:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:02 compute-0 ceph-mon[75358]: pgmap v1524: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:03 compute-0 nova_compute[244644]: 2025-12-04 11:00:03.642 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:03 compute-0 nova_compute[244644]: 2025-12-04 11:00:03.678 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:04 compute-0 nova_compute[244644]: 2025-12-04 11:00:04.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:04 compute-0 nova_compute[244644]: 2025-12-04 11:00:04.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:04 compute-0 nova_compute[244644]: 2025-12-04 11:00:04.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:05 compute-0 nova_compute[244644]: 2025-12-04 11:00:05.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:05 compute-0 nova_compute[244644]: 2025-12-04 11:00:05.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 11:00:05 compute-0 ceph-mon[75358]: pgmap v1525: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:07 compute-0 ceph-mon[75358]: pgmap v1526: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:09 compute-0 ceph-mon[75358]: pgmap v1527: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:10 compute-0 podman[272946]: 2025-12-04 11:00:10.941308167 +0000 UTC m=+0.047851635 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 04 11:00:10 compute-0 podman[272945]: 2025-12-04 11:00:10.97231109 +0000 UTC m=+0.080406406 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 04 11:00:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 11:00:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:00:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 11:00:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:00:11 compute-0 ceph-mon[75358]: pgmap v1528: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:00:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:00:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:13 compute-0 ceph-mon[75358]: pgmap v1529: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:15 compute-0 ceph-mon[75358]: pgmap v1530: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:17 compute-0 ceph-mon[75358]: pgmap v1531: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:19 compute-0 ceph-mon[75358]: pgmap v1532: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:20 compute-0 ceph-mon[75358]: pgmap v1533: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:23 compute-0 ceph-mon[75358]: pgmap v1534: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:24 compute-0 sudo[272988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:00:24 compute-0 sudo[272988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:24 compute-0 sudo[272988]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:24 compute-0 sudo[273013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 04 11:00:24 compute-0 sudo[273013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:24 compute-0 podman[273079]: 2025-12-04 11:00:24.580580666 +0000 UTC m=+0.067722885 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 04 11:00:24 compute-0 podman[273079]: 2025-12-04 11:00:24.704445577 +0000 UTC m=+0.191587786 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 04 11:00:25 compute-0 sudo[273013]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 11:00:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:25 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 11:00:25 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:25 compute-0 sudo[273263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:00:25 compute-0 sudo[273263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:25 compute-0 sudo[273263]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:25 compute-0 sudo[273288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 11:00:25 compute-0 sudo[273288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:25 compute-0 ceph-mon[75358]: pgmap v1535: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:25 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:26 compute-0 sudo[273288]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 11:00:26 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:00:26 compute-0 sudo[273344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:00:26 compute-0 sudo[273344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:26 compute-0 sudo[273344]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:26 compute-0 sudo[273369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 11:00:26 compute-0 sudo[273369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 11:00:26 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.557188471 +0000 UTC m=+0.039748328 container create 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 11:00:26 compute-0 systemd[1]: Started libpod-conmon-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope.
Dec 04 11:00:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.540092191 +0000 UTC m=+0.022652068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.652763738 +0000 UTC m=+0.135323615 container init 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.661667937 +0000 UTC m=+0.144227794 container start 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.664873675 +0000 UTC m=+0.147433532 container attach 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 11:00:26 compute-0 hungry_bouman[273423]: 167 167
Dec 04 11:00:26 compute-0 systemd[1]: libpod-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope: Deactivated successfully.
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.669305785 +0000 UTC m=+0.151865692 container died 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d1f60c359dbc4940c5c3c7f8c2b5372049ee72f657ced325ddb3b867f8494ce-merged.mount: Deactivated successfully.
Dec 04 11:00:26 compute-0 podman[273406]: 2025-12-04 11:00:26.713353536 +0000 UTC m=+0.195913393 container remove 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 04 11:00:26 compute-0 systemd[1]: libpod-conmon-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope: Deactivated successfully.
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_11:00:26
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.log']
Dec 04 11:00:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 11:00:26 compute-0 podman[273447]: 2025-12-04 11:00:26.885469093 +0000 UTC m=+0.052464499 container create 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:00:26 compute-0 systemd[1]: Started libpod-conmon-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope.
Dec 04 11:00:26 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:26 compute-0 podman[273447]: 2025-12-04 11:00:26.857923297 +0000 UTC m=+0.024918793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:26 compute-0 podman[273447]: 2025-12-04 11:00:26.957251166 +0000 UTC m=+0.124246592 container init 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 04 11:00:26 compute-0 podman[273447]: 2025-12-04 11:00:26.962777262 +0000 UTC m=+0.129772668 container start 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 04 11:00:26 compute-0 podman[273447]: 2025-12-04 11:00:26.966718299 +0000 UTC m=+0.133713725 container attach 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 04 11:00:27 compute-0 cranky_mendel[273463]: --> passed data devices: 0 physical, 3 LVM
Dec 04 11:00:27 compute-0 cranky_mendel[273463]: --> All data devices are unavailable
Dec 04 11:00:27 compute-0 systemd[1]: libpod-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope: Deactivated successfully.
Dec 04 11:00:27 compute-0 podman[273447]: 2025-12-04 11:00:27.429581777 +0000 UTC m=+0.596577183 container died 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 04 11:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6-merged.mount: Deactivated successfully.
Dec 04 11:00:27 compute-0 podman[273447]: 2025-12-04 11:00:27.474853589 +0000 UTC m=+0.641848995 container remove 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:00:27 compute-0 systemd[1]: libpod-conmon-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope: Deactivated successfully.
Dec 04 11:00:27 compute-0 sudo[273369]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:27 compute-0 ceph-mon[75358]: pgmap v1536: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:27 compute-0 sudo[273494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:00:27 compute-0 sudo[273494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:27 compute-0 sudo[273494]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:27 compute-0 sudo[273519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 11:00:27 compute-0 sudo[273519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:27 compute-0 podman[273557]: 2025-12-04 11:00:27.976230602 +0000 UTC m=+0.039691555 container create 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:00:28 compute-0 systemd[1]: Started libpod-conmon-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope.
Dec 04 11:00:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:27.958056586 +0000 UTC m=+0.021517569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:28.064216363 +0000 UTC m=+0.127677326 container init 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:28.073521592 +0000 UTC m=+0.136982545 container start 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:28.077238723 +0000 UTC m=+0.140699676 container attach 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:00:28 compute-0 nostalgic_lumiere[273574]: 167 167
Dec 04 11:00:28 compute-0 systemd[1]: libpod-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope: Deactivated successfully.
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:28.082069452 +0000 UTC m=+0.145530425 container died 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2198143474a0656d5632e801956129500f0c8e9460b95bbc617575067e47dc43-merged.mount: Deactivated successfully.
Dec 04 11:00:28 compute-0 podman[273557]: 2025-12-04 11:00:28.125586881 +0000 UTC m=+0.189047834 container remove 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 04 11:00:28 compute-0 systemd[1]: libpod-conmon-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope: Deactivated successfully.
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.296794235 +0000 UTC m=+0.045850597 container create 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 11:00:28 compute-0 systemd[1]: Started libpod-conmon-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope.
Dec 04 11:00:28 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.27703705 +0000 UTC m=+0.026093432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.383883434 +0000 UTC m=+0.132939816 container init 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.431925484 +0000 UTC m=+0.180981846 container start 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.436056135 +0000 UTC m=+0.185112517 container attach 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:28 compute-0 brave_edison[273611]: {
Dec 04 11:00:28 compute-0 brave_edison[273611]:     "0": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:         {
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "devices": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "/dev/loop3"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             ],
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_name": "ceph_lv0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_size": "21470642176",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "name": "ceph_lv0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "tags": {
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_name": "ceph",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.crush_device_class": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.encrypted": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.objectstore": "bluestore",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_id": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.vdo": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.with_tpm": "0"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             },
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "vg_name": "ceph_vg0"
Dec 04 11:00:28 compute-0 brave_edison[273611]:         }
Dec 04 11:00:28 compute-0 brave_edison[273611]:     ],
Dec 04 11:00:28 compute-0 brave_edison[273611]:     "1": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:         {
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "devices": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "/dev/loop4"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             ],
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_name": "ceph_lv1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_size": "21470642176",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "name": "ceph_lv1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "tags": {
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_name": "ceph",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.crush_device_class": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.encrypted": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.objectstore": "bluestore",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_id": "1",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.vdo": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.with_tpm": "0"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             },
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "vg_name": "ceph_vg1"
Dec 04 11:00:28 compute-0 brave_edison[273611]:         }
Dec 04 11:00:28 compute-0 brave_edison[273611]:     ],
Dec 04 11:00:28 compute-0 brave_edison[273611]:     "2": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:         {
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "devices": [
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "/dev/loop5"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             ],
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_name": "ceph_lv2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_size": "21470642176",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "name": "ceph_lv2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "tags": {
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.cluster_name": "ceph",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.crush_device_class": "",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.encrypted": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.objectstore": "bluestore",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osd_id": "2",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.vdo": "0",
Dec 04 11:00:28 compute-0 brave_edison[273611]:                 "ceph.with_tpm": "0"
Dec 04 11:00:28 compute-0 brave_edison[273611]:             },
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "type": "block",
Dec 04 11:00:28 compute-0 brave_edison[273611]:             "vg_name": "ceph_vg2"
Dec 04 11:00:28 compute-0 brave_edison[273611]:         }
Dec 04 11:00:28 compute-0 brave_edison[273611]:     ]
Dec 04 11:00:28 compute-0 brave_edison[273611]: }
Dec 04 11:00:28 compute-0 systemd[1]: libpod-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope: Deactivated successfully.
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.728928299 +0000 UTC m=+0.477984681 container died 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876-merged.mount: Deactivated successfully.
Dec 04 11:00:28 compute-0 podman[273595]: 2025-12-04 11:00:28.938623918 +0000 UTC m=+0.687680280 container remove 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 11:00:28 compute-0 sudo[273519]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:28 compute-0 systemd[1]: libpod-conmon-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope: Deactivated successfully.
Dec 04 11:00:29 compute-0 sudo[273632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:00:29 compute-0 sudo[273632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:29 compute-0 sudo[273632]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:29 compute-0 sudo[273657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 11:00:29 compute-0 sudo[273657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.442634017 +0000 UTC m=+0.067620632 container create d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:00:29 compute-0 systemd[1]: Started libpod-conmon-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope.
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.404003508 +0000 UTC m=+0.028990163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:29 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.521447613 +0000 UTC m=+0.146434248 container init d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.553662504 +0000 UTC m=+0.178649109 container start d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.558120513 +0000 UTC m=+0.183107148 container attach d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 11:00:29 compute-0 peaceful_chandrasekhar[273710]: 167 167
Dec 04 11:00:29 compute-0 systemd[1]: libpod-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope: Deactivated successfully.
Dec 04 11:00:29 compute-0 conmon[273710]: conmon d942c2aa4cc58121f61b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope/container/memory.events
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.561687141 +0000 UTC m=+0.186673756 container died d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 04 11:00:29 compute-0 ceph-mon[75358]: pgmap v1537: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4d2bf931d1ac1c29dad3a1cf907223e1bbff8c76ddf349ef7bcdfe77d3ea2b7-merged.mount: Deactivated successfully.
Dec 04 11:00:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:29 compute-0 podman[273694]: 2025-12-04 11:00:29.868986648 +0000 UTC m=+0.493973263 container remove d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 04 11:00:29 compute-0 systemd[1]: libpod-conmon-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope: Deactivated successfully.
Dec 04 11:00:30 compute-0 podman[273733]: 2025-12-04 11:00:30.027538202 +0000 UTC m=+0.031747341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:00:30 compute-0 podman[273733]: 2025-12-04 11:00:30.333020304 +0000 UTC m=+0.337229453 container create 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 04 11:00:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:30 compute-0 systemd[1]: Started libpod-conmon-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope.
Dec 04 11:00:30 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:00:30 compute-0 podman[273733]: 2025-12-04 11:00:30.754530537 +0000 UTC m=+0.758739666 container init 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 04 11:00:30 compute-0 podman[273733]: 2025-12-04 11:00:30.769615627 +0000 UTC m=+0.773824736 container start 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 04 11:00:30 compute-0 podman[273733]: 2025-12-04 11:00:30.946921222 +0000 UTC m=+0.951130351 container attach 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 11:00:31 compute-0 lvm[273832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 11:00:31 compute-0 lvm[273842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 11:00:31 compute-0 lvm[273832]: VG ceph_vg0 finished
Dec 04 11:00:31 compute-0 lvm[273842]: VG ceph_vg1 finished
Dec 04 11:00:31 compute-0 lvm[273841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 11:00:31 compute-0 lvm[273841]: VG ceph_vg2 finished
Dec 04 11:00:31 compute-0 podman[273824]: 2025-12-04 11:00:31.571466391 +0000 UTC m=+0.100337106 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 11:00:31 compute-0 bold_noether[273749]: {}
Dec 04 11:00:31 compute-0 systemd[1]: libpod-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Deactivated successfully.
Dec 04 11:00:31 compute-0 systemd[1]: libpod-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Consumed 1.421s CPU time.
Dec 04 11:00:31 compute-0 podman[273733]: 2025-12-04 11:00:31.645875068 +0000 UTC m=+1.650084187 container died 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 11:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228-merged.mount: Deactivated successfully.
Dec 04 11:00:31 compute-0 podman[273733]: 2025-12-04 11:00:31.700468499 +0000 UTC m=+1.704677608 container remove 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 04 11:00:31 compute-0 ceph-mon[75358]: pgmap v1538: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:31 compute-0 systemd[1]: libpod-conmon-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Deactivated successfully.
Dec 04 11:00:31 compute-0 sudo[273657]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 11:00:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:31 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 11:00:31 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:31 compute-0 sudo[273864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 11:00:31 compute-0 sudo[273864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:00:31 compute-0 sudo[273864]: pam_unix(sudo:session): session closed for user root
Dec 04 11:00:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:32 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:00:33 compute-0 ceph-mon[75358]: pgmap v1539: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:34 compute-0 ceph-mon[75358]: pgmap v1540: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:00:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 11:00:37 compute-0 ceph-mon[75358]: pgmap v1541: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:39 compute-0 ceph-mon[75358]: pgmap v1542: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:41 compute-0 ceph-mon[75358]: pgmap v1543: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:41 compute-0 podman[273890]: 2025-12-04 11:00:41.967921037 +0000 UTC m=+0.068614446 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 11:00:42 compute-0 podman[273889]: 2025-12-04 11:00:42.000167239 +0000 UTC m=+0.101180186 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 04 11:00:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:43 compute-0 ceph-mon[75358]: pgmap v1544: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:45 compute-0 ceph-mon[75358]: pgmap v1545: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:47 compute-0 ceph-mon[75358]: pgmap v1546: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:49 compute-0 ceph-mon[75358]: pgmap v1547: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:51 compute-0 ceph-mon[75358]: pgmap v1548: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:54 compute-0 ceph-mon[75358]: pgmap v1549: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:00:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:00:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.933 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:00:55 compute-0 ceph-mon[75358]: pgmap v1550: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:55 compute-0 nova_compute[244644]: 2025-12-04 11:00:55.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:57 compute-0 ceph-mon[75358]: pgmap v1551: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:00:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:00:59 compute-0 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:00:59 compute-0 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 11:00:59 compute-0 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 11:00:59 compute-0 nova_compute[244644]: 2025-12-04 11:00:59.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 11:00:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:00:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:00:59 compute-0 ceph-mon[75358]: pgmap v1552: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.500 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.501 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 11:01:00 compute-0 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 11:01:00 compute-0 ceph-mon[75358]: pgmap v1553: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:01:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856557719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.078 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.246 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.248 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.249 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.249 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:01:01 compute-0 CROND[273955]: (root) CMD (run-parts /etc/cron.hourly)
Dec 04 11:01:01 compute-0 run-parts[273958]: (/etc/cron.hourly) starting 0anacron
Dec 04 11:01:01 compute-0 run-parts[273969]: (/etc/cron.hourly) finished 0anacron
Dec 04 11:01:01 compute-0 CROND[273954]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.940 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.940 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 11:01:01 compute-0 nova_compute[244644]: 2025-12-04 11:01:01.958 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 11:01:01 compute-0 podman[273953]: 2025-12-04 11:01:01.99954063 +0000 UTC m=+0.100872248 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 04 11:01:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3856557719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:01:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:01:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862235050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:01:02 compute-0 nova_compute[244644]: 2025-12-04 11:01:02.537 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:01:02 compute-0 nova_compute[244644]: 2025-12-04 11:01:02.543 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 11:01:02 compute-0 nova_compute[244644]: 2025-12-04 11:01:02.579 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 11:01:02 compute-0 nova_compute[244644]: 2025-12-04 11:01:02.581 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 11:01:02 compute-0 nova_compute[244644]: 2025-12-04 11:01:02.582 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:01:03 compute-0 ceph-mon[75358]: pgmap v1554: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3862235050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:01:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:05 compute-0 ceph-mon[75358]: pgmap v1555: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:06 compute-0 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:06 compute-0 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:06 compute-0 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:06 compute-0 nova_compute[244644]: 2025-12-04 11:01:06.579 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:07 compute-0 nova_compute[244644]: 2025-12-04 11:01:07.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:07 compute-0 nova_compute[244644]: 2025-12-04 11:01:07.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 11:01:07 compute-0 ceph-mon[75358]: pgmap v1556: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:09 compute-0 ceph-mon[75358]: pgmap v1557: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 11:01:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:01:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 11:01:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:01:11 compute-0 ceph-mon[75358]: pgmap v1558: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:01:11 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:01:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:12 compute-0 ceph-mon[75358]: pgmap v1559: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:12 compute-0 podman[274009]: 2025-12-04 11:01:12.94507568 +0000 UTC m=+0.050620354 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 04 11:01:12 compute-0 podman[274008]: 2025-12-04 11:01:12.97440608 +0000 UTC m=+0.084025754 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 04 11:01:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:15 compute-0 ceph-mon[75358]: pgmap v1560: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:17 compute-0 ceph-mon[75358]: pgmap v1561: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:19 compute-0 ceph-mon[75358]: pgmap v1562: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:20 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:21 compute-0 ceph-mon[75358]: pgmap v1563: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:22 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:23 compute-0 ceph-mon[75358]: pgmap v1564: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:24 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:24 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:25 compute-0 ceph-mon[75358]: pgmap v1565: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_11:01:26
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 04 11:01:26 compute-0 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:28 compute-0 ceph-mon[75358]: pgmap v1566: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:28 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:29 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:29 compute-0 ceph-mon[75358]: pgmap v1567: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:29 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:30 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:30 compute-0 sshd-session[274054]: Invalid user admin1234 from 101.47.163.20 port 43166
Dec 04 11:01:30 compute-0 sshd-session[274054]: Received disconnect from 101.47.163.20 port 43166:11: Bye Bye [preauth]
Dec 04 11:01:30 compute-0 sshd-session[274054]: Disconnected from invalid user admin1234 101.47.163.20 port 43166 [preauth]
Dec 04 11:01:31 compute-0 sudo[274056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:01:31 compute-0 sudo[274056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:31 compute-0 sudo[274056]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:31 compute-0 sudo[274081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 04 11:01:31 compute-0 sudo[274081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:32 compute-0 sudo[274081]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 11:01:32 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 04 11:01:32 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 11:01:32 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 04 11:01:32 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:32 compute-0 ceph-mon[75358]: pgmap v1568: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:32 compute-0 podman[274138]: 2025-12-04 11:01:32.948890701 +0000 UTC m=+0.056756474 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 04 11:01:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 04 11:01:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 04 11:01:33 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 11:01:33 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:33 compute-0 sudo[274158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:01:33 compute-0 sudo[274158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:33 compute-0 sudo[274158]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:33 compute-0 sudo[274183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 04 11:01:33 compute-0 sudo[274183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.552367842 +0000 UTC m=+0.042379121 container create 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: pgmap v1569: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 04 11:01:33 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:33 compute-0 systemd[1]: Started libpod-conmon-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope.
Dec 04 11:01:33 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.534433703 +0000 UTC m=+0.024445002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.640279342 +0000 UTC m=+0.130290621 container init 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.647216222 +0000 UTC m=+0.137227491 container start 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.652136234 +0000 UTC m=+0.142147533 container attach 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 11:01:33 compute-0 epic_benz[274234]: 167 167
Dec 04 11:01:33 compute-0 systemd[1]: libpod-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope: Deactivated successfully.
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.65364087 +0000 UTC m=+0.143652159 container died 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 04 11:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca934bbc7694457922d374442947a315e481a35511f6fd2deb186a91006af53-merged.mount: Deactivated successfully.
Dec 04 11:01:33 compute-0 podman[274218]: 2025-12-04 11:01:33.696628615 +0000 UTC m=+0.186639874 container remove 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 04 11:01:33 compute-0 systemd[1]: libpod-conmon-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope: Deactivated successfully.
Dec 04 11:01:33 compute-0 podman[274258]: 2025-12-04 11:01:33.835848045 +0000 UTC m=+0.022479543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:34 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:34 compute-0 podman[274258]: 2025-12-04 11:01:34.393818829 +0000 UTC m=+0.580450297 container create 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 04 11:01:34 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:34 compute-0 systemd[1]: Started libpod-conmon-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope.
Dec 04 11:01:34 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:34 compute-0 podman[274258]: 2025-12-04 11:01:34.834807039 +0000 UTC m=+1.021438537 container init 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:34 compute-0 podman[274258]: 2025-12-04 11:01:34.841496893 +0000 UTC m=+1.028128371 container start 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 04 11:01:34 compute-0 podman[274258]: 2025-12-04 11:01:34.850666019 +0000 UTC m=+1.037297497 container attach 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:01:35 compute-0 sleepy_clarke[274275]: --> passed data devices: 0 physical, 3 LVM
Dec 04 11:01:35 compute-0 sleepy_clarke[274275]: --> All data devices are unavailable
Dec 04 11:01:35 compute-0 systemd[1]: libpod-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope: Deactivated successfully.
Dec 04 11:01:35 compute-0 podman[274258]: 2025-12-04 11:01:35.33928386 +0000 UTC m=+1.525915338 container died 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 04 11:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32-merged.mount: Deactivated successfully.
Dec 04 11:01:35 compute-0 podman[274258]: 2025-12-04 11:01:35.75464607 +0000 UTC m=+1.941277548 container remove 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 04 11:01:35 compute-0 ceph-mon[75358]: pgmap v1570: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:35 compute-0 sudo[274183]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:35 compute-0 systemd[1]: libpod-conmon-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope: Deactivated successfully.
Dec 04 11:01:35 compute-0 sudo[274307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:01:35 compute-0 sudo[274307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:35 compute-0 sudo[274307]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:35 compute-0 sudo[274332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- lvm list --format json
Dec 04 11:01:35 compute-0 sudo[274332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:36 compute-0 podman[274368]: 2025-12-04 11:01:36.167540411 +0000 UTC m=+0.023045457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:36 compute-0 podman[274368]: 2025-12-04 11:01:36.505241505 +0000 UTC m=+0.360746541 container create 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:36 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:36 compute-0 systemd[1]: Started libpod-conmon-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope.
Dec 04 11:01:36 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:37 compute-0 podman[274368]: 2025-12-04 11:01:37.174610864 +0000 UTC m=+1.030115910 container init 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 11:01:37 compute-0 podman[274368]: 2025-12-04 11:01:37.181336679 +0000 UTC m=+1.036841695 container start 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 11:01:37 compute-0 vibrant_feistel[274384]: 167 167
Dec 04 11:01:37 compute-0 podman[274368]: 2025-12-04 11:01:37.185706796 +0000 UTC m=+1.041211812 container attach 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:37 compute-0 systemd[1]: libpod-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope: Deactivated successfully.
Dec 04 11:01:37 compute-0 podman[274368]: 2025-12-04 11:01:37.186389694 +0000 UTC m=+1.041894710 container died 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 11:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2e2461f6c03cd9a8d4856abaaa972b5cac8c4e40ce387a31c5db22d9e625bd-merged.mount: Deactivated successfully.
Dec 04 11:01:37 compute-0 ceph-mon[75358]: pgmap v1571: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 04 11:01:37 compute-0 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 04 11:01:38 compute-0 podman[274368]: 2025-12-04 11:01:38.296814145 +0000 UTC m=+2.152319151 container remove 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 04 11:01:38 compute-0 systemd[1]: libpod-conmon-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope: Deactivated successfully.
Dec 04 11:01:38 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:38 compute-0 podman[274407]: 2025-12-04 11:01:38.555739764 +0000 UTC m=+0.105310297 container create 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:38 compute-0 podman[274407]: 2025-12-04 11:01:38.483130991 +0000 UTC m=+0.032701554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:38 compute-0 systemd[1]: Started libpod-conmon-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope.
Dec 04 11:01:38 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:39 compute-0 podman[274407]: 2025-12-04 11:01:39.096590818 +0000 UTC m=+0.646161371 container init 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:01:39 compute-0 podman[274407]: 2025-12-04 11:01:39.104911802 +0000 UTC m=+0.654482335 container start 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 04 11:01:39 compute-0 podman[274407]: 2025-12-04 11:01:39.11134356 +0000 UTC m=+0.660914123 container attach 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 04 11:01:39 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:39 compute-0 awesome_napier[274424]: {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     "0": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "devices": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "/dev/loop3"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             ],
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_name": "ceph_lv0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_size": "21470642176",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "name": "ceph_lv0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "tags": {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_name": "ceph",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.crush_device_class": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.encrypted": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.objectstore": "bluestore",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_id": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.vdo": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.with_tpm": "0"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             },
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "vg_name": "ceph_vg0"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         }
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     ],
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     "1": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "devices": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "/dev/loop4"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             ],
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_name": "ceph_lv1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_size": "21470642176",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "name": "ceph_lv1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "tags": {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_name": "ceph",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.crush_device_class": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.encrypted": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.objectstore": "bluestore",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_id": "1",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.vdo": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.with_tpm": "0"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             },
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "vg_name": "ceph_vg1"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         }
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     ],
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     "2": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "devices": [
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "/dev/loop5"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             ],
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_name": "ceph_lv2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_size": "21470642176",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "name": "ceph_lv2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "tags": {
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cephx_lockbox_secret": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.cluster_name": "ceph",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.crush_device_class": "",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.encrypted": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.objectstore": "bluestore",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osd_id": "2",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.vdo": "0",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:                 "ceph.with_tpm": "0"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             },
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "type": "block",
Dec 04 11:01:39 compute-0 awesome_napier[274424]:             "vg_name": "ceph_vg2"
Dec 04 11:01:39 compute-0 awesome_napier[274424]:         }
Dec 04 11:01:39 compute-0 awesome_napier[274424]:     ]
Dec 04 11:01:39 compute-0 awesome_napier[274424]: }
Dec 04 11:01:39 compute-0 systemd[1]: libpod-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope: Deactivated successfully.
Dec 04 11:01:39 compute-0 podman[274407]: 2025-12-04 11:01:39.450145121 +0000 UTC m=+0.999715654 container died 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 04 11:01:39 compute-0 ceph-mon[75358]: pgmap v1572: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80-merged.mount: Deactivated successfully.
Dec 04 11:01:39 compute-0 podman[274407]: 2025-12-04 11:01:39.791224008 +0000 UTC m=+1.340794541 container remove 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 04 11:01:39 compute-0 systemd[1]: libpod-conmon-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope: Deactivated successfully.
Dec 04 11:01:39 compute-0 sudo[274332]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:39 compute-0 sudo[274446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 04 11:01:39 compute-0 sudo[274446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:39 compute-0 sudo[274446]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:39 compute-0 sudo[274471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -- raw list --format json
Dec 04 11:01:39 compute-0 sudo[274471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.257802617 +0000 UTC m=+0.036412045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.429416602 +0000 UTC m=+0.208026000 container create 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 11:01:40 compute-0 systemd[1]: Started libpod-conmon-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope.
Dec 04 11:01:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:40 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.619515461 +0000 UTC m=+0.398124879 container init 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.627055945 +0000 UTC m=+0.405665343 container start 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.631068784 +0000 UTC m=+0.409678182 container attach 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:40 compute-0 dreamy_thompson[274524]: 167 167
Dec 04 11:01:40 compute-0 systemd[1]: libpod-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope: Deactivated successfully.
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.633892853 +0000 UTC m=+0.412502251 container died 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 04 11:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-64f4f232a10c8a75b4adae45ba541a76ca20fd23c002ea3cee1806cd9709c88a-merged.mount: Deactivated successfully.
Dec 04 11:01:40 compute-0 podman[274508]: 2025-12-04 11:01:40.674259195 +0000 UTC m=+0.452868593 container remove 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 04 11:01:40 compute-0 systemd[1]: libpod-conmon-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope: Deactivated successfully.
Dec 04 11:01:40 compute-0 podman[274548]: 2025-12-04 11:01:40.855449604 +0000 UTC m=+0.049531367 container create 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:40 compute-0 systemd[1]: Started libpod-conmon-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope.
Dec 04 11:01:40 compute-0 podman[274548]: 2025-12-04 11:01:40.837157555 +0000 UTC m=+0.031239338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 04 11:01:40 compute-0 systemd[1]: Started libcrun container.
Dec 04 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 04 11:01:40 compute-0 podman[274548]: 2025-12-04 11:01:40.962396222 +0000 UTC m=+0.156478005 container init 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 04 11:01:40 compute-0 podman[274548]: 2025-12-04 11:01:40.969022354 +0000 UTC m=+0.163104117 container start 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 04 11:01:40 compute-0 podman[274548]: 2025-12-04 11:01:40.973585957 +0000 UTC m=+0.167667760 container attach 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 04 11:01:41 compute-0 lvm[274644]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 11:01:41 compute-0 lvm[274643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 11:01:41 compute-0 lvm[274644]: VG ceph_vg1 finished
Dec 04 11:01:41 compute-0 lvm[274643]: VG ceph_vg0 finished
Dec 04 11:01:41 compute-0 lvm[274646]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 11:01:41 compute-0 lvm[274646]: VG ceph_vg2 finished
Dec 04 11:01:41 compute-0 ceph-mon[75358]: pgmap v1573: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:41 compute-0 stupefied_hellman[274565]: {}
Dec 04 11:01:41 compute-0 systemd[1]: libpod-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Deactivated successfully.
Dec 04 11:01:41 compute-0 podman[274548]: 2025-12-04 11:01:41.903186647 +0000 UTC m=+1.097268430 container died 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 04 11:01:41 compute-0 systemd[1]: libpod-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Consumed 1.569s CPU time.
Dec 04 11:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc-merged.mount: Deactivated successfully.
Dec 04 11:01:41 compute-0 podman[274548]: 2025-12-04 11:01:41.94971166 +0000 UTC m=+1.143793453 container remove 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 04 11:01:41 compute-0 systemd[1]: libpod-conmon-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Deactivated successfully.
Dec 04 11:01:41 compute-0 sudo[274471]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 04 11:01:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:42 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 04 11:01:42 compute-0 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:42 compute-0 sudo[274659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 04 11:01:42 compute-0 sudo[274659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 04 11:01:42 compute-0 sudo[274659]: pam_unix(sudo:session): session closed for user root
Dec 04 11:01:42 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:43 compute-0 podman[274685]: 2025-12-04 11:01:43.969145946 +0000 UTC m=+0.061802469 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 04 11:01:44 compute-0 podman[274684]: 2025-12-04 11:01:44.00226309 +0000 UTC m=+0.098100570 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 04 11:01:44 compute-0 sshd-session[274727]: Accepted publickey for zuul from 192.168.122.10 port 50442 ssh2: ECDSA SHA256:ltzQ7eyTJCBm6niPvDJ7p04RSqvLZR+VyP9WoVTD4UQ
Dec 04 11:01:44 compute-0 systemd-logind[798]: New session 55 of user zuul.
Dec 04 11:01:44 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 04 11:01:44 compute-0 sshd-session[274727]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 04 11:01:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:44 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec 04 11:01:44 compute-0 ceph-mon[75358]: pgmap v1574: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:44 compute-0 sudo[274731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 04 11:01:44 compute-0 sudo[274731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 04 11:01:44 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:44 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:45 compute-0 ceph-mon[75358]: pgmap v1575: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:46 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:46 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14844 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:47 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14846 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:48 compute-0 ceph-mon[75358]: pgmap v1576: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:48 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:48 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 04 11:01:48 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53782528' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 11:01:49 compute-0 ceph-mon[75358]: from='client.14844 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:49 compute-0 ceph-mon[75358]: from='client.14846 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:49 compute-0 ceph-mon[75358]: pgmap v1577: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:49 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/53782528' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 04 11:01:49 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:50 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:51 compute-0 ceph-mon[75358]: pgmap v1578: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:51 compute-0 ovs-vsctl[275015]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 04 11:01:52 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:52 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 04 11:01:52 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 04 11:01:52 compute-0 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 04 11:01:53 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: cache status {prefix=cache status} (starting...)
Dec 04 11:01:53 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: client ls {prefix=client ls} (starting...)
Dec 04 11:01:53 compute-0 lvm[275352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 04 11:01:53 compute-0 lvm[275351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 04 11:01:53 compute-0 lvm[275352]: VG ceph_vg1 finished
Dec 04 11:01:53 compute-0 lvm[275351]: VG ceph_vg2 finished
Dec 04 11:01:53 compute-0 lvm[275389]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 04 11:01:53 compute-0 lvm[275389]: VG ceph_vg0 finished
Dec 04 11:01:53 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14850 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:53 compute-0 ceph-mon[75358]: pgmap v1579: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: damage ls {prefix=damage ls} (starting...)
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump loads {prefix=dump loads} (starting...)
Dec 04 11:01:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14852 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 04 11:01:54 compute-0 sshd-session[274931]: Invalid user adminpldt from 61.72.59.106 port 56079
Dec 04 11:01:54 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 04 11:01:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 04 11:01:54 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169780587' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 04 11:01:54 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:54 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14856 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:54 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 04 11:01:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.933 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:01:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.935 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:01:54 compute-0 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.935 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:01:54 compute-0 ceph-mon[75358]: from='client.14850 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:54 compute-0 ceph-mon[75358]: from='client.14852 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:54 compute-0 ceph-mon[75358]: pgmap v1580: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:01:54 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3169780587' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 04 11:01:54 compute-0 ceph-mon[75358]: from='client.14856 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:55 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 04 11:01:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 04 11:01:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130012547' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:55 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 04 11:01:55 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:55 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T11:01:55.303+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 11:01:55 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 11:01:55 compute-0 sshd-session[274931]: Connection closed by invalid user adminpldt 61.72.59.106 port 56079 [preauth]
Dec 04 11:01:55 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: ops {prefix=ops} (starting...)
Dec 04 11:01:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 04 11:01:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450865011' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 04 11:01:55 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 04 11:01:55 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856620364' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 04 11:01:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2130012547' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 04 11:01:55 compute-0 ceph-mon[75358]: from='client.14860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3450865011' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 04 11:01:55 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3856620364' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 04 11:01:56 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session ls {prefix=session ls} (starting...)
Dec 04 11:01:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 04 11:01:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948930561' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 04 11:01:56 compute-0 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: status {prefix=status} (starting...)
Dec 04 11:01:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 04 11:01:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851079445' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 11:01:56 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 04 11:01:56 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14870 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:56 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 04 11:01:56 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507816033' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 11:01:57 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14874 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:57 compute-0 nova_compute[244644]: 2025-12-04 11:01:57.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:01:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/948930561' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 04 11:01:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3851079445' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 11:01:57 compute-0 ceph-mon[75358]: pgmap v1581: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 04 11:01:57 compute-0 ceph-mon[75358]: from='client.14870 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:57 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1507816033' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 11:01:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 04 11:01:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683046066' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 11:01:57 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 04 11:01:57 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497263831' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 11:01:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054515866' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:58 compute-0 ceph-mon[75358]: from='client.14874 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2683046066' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1497263831' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2054515866' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 04 11:01:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733613048' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:01:58 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 04 11:01:58 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120272066' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14886 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:58 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 04 11:01:58 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T11:01:58.959+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 04 11:01:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 04 11:01:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776949453' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 11:01:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3733613048' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 04 11:01:59 compute-0 ceph-mon[75358]: pgmap v1582: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:01:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1120272066' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 04 11:01:59 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/776949453' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 11:01:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 04 11:01:59 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506140270' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 04 11:01:59 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:01:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec 04 11:01:59 compute-0 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec 04 11:01:59 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14892 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:35.959996+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:36.960155+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:37.960314+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:38.960464+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:39.960619+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:40.960837+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:41.961064+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:42.961266+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:43.961414+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:44.961548+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:45.961739+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:46.961915+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:47.962092+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:48.962266+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:49.962417+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.962546+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.962734+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.962900+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.963049+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.963185+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.963342+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.963496+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.963650+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.963796+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.963935+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.964090+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.964306+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.964444+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.964579+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.964716+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.964916+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.965114+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.965311+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.965470+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.965624+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.965803+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.965991+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.966158+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.966301+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.966438+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.966562+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.966727+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.966889+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.967085+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.967279+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.967430+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.967621+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.967774+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.967949+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.968127+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.968283+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.968428+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.968644+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.968897+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.969086+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.969341+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.969495+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.969640+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.969791+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.970037+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.970185+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.970328+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.970508+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.970711+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.970964+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.971155+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1572864 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.971340+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1556480 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.971478+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.971639+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.973564+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.973728+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.973879+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.974007+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.974184+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.974313+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.974483+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.974655+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.974789+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.974923+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.975160+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.975279+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.975543+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.975697+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.975849+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.975996+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.976162+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.976446+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.976595+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.976688+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.976873+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.977009+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.977124+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.977216+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.977364+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.977489+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.977632+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.977786+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.977929+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.978117+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.978217+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a3a34000
Dec 04 11:01:59 compute-0 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:01:59 compute-0 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: get_auth_request con 0x55c0a5b1a800 auth_method 0
Dec 04 11:01:59 compute-0 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.978373+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.978516+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.978653+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.978788+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.978927+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.979057+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.979290+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.979443+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.979649+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.979809+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.980018+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.980164+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.980286+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.980445+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.980595+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.980750+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.980947+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.981136+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.981330+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.981450+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.981587+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.981750+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.981893+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.982129+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.982318+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.982541+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.982728+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.982863+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.983011+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.983182+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.983325+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.983491+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.983628+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.983750+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.983891+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.984029+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.984160+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.984297+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.985281+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.985419+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.985545+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.985676+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.985822+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.985986+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.986144+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.986284+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.986486+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.986653+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.986783+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.986968+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.987125+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.987323+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.987464+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.987666+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.987820+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.987972+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.988159+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.988309+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.988458+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.988646+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.988823+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.989016+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.989152+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.989301+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.989459+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.989632+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.989809+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.989972+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450f800
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.998474121s of 300.141143799s, submitted: 90
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 1024000 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.990166+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.990361+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.990581+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.990756+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.990916+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.005024+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.005292+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.005475+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.005678+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.005888+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.006065+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.006416+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.006680+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.006870+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.007080+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.007267+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.007449+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.007592+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.007808+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.007972+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.008167+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.008387+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.008543+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.008700+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.008866+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.009038+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.009175+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.009335+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.009584+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.009720+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.009995+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.010159+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.010360+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.010497+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.010643+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.010795+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.010955+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.011183+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.012027+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.012319+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.012692+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.012875+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 917504 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.013009+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.013184+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.013328+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.013512+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.013676+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.013818+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.014025+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.014259+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.014456+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.014661+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.014816+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.014981+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.015164+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.015337+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.015484+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.015641+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.015849+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.015995+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.016157+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.016357+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.016560+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.016711+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.016877+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.017068+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.017270+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.017414+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.017614+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.017747+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.017926+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.018170+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.018414+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.018628+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.018773+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.018922+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.019283+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.019472+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.019637+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.019785+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.019899+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.020038+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.020150+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.020285+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.020429+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.020633+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.020764+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.020980+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.021197+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.021346+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.021495+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.023421+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.023619+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.023799+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.023927+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.024074+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.024336+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.024551+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.024791+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.024988+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.025194+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.025386+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.025596+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.025791+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.026054+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.026196+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.026351+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.026493+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.026690+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.026961+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.027151+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.027305+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.027531+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.027702+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.027871+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.028020+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.028178+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.028318+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.028488+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.028652+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.028811+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.028944+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.029128+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.029284+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.029443+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.029640+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.029795+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.029959+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.030463+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.030641+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.030823+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.030992+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.031269+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.031417+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.031554+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.031693+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.031851+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.031983+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.032161+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.032305+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.032477+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.032668+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.032845+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.033004+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.033215+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.033363+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.033727+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.033959+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.034192+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.034386+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.034577+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.034767+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.034930+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.035074+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.035235+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.035414+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.035575+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.035730+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.035902+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.036028+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.036167+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.036343+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.036507+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.036634+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.036823+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.036969+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.037153+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.037276+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.037457+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.037612+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.037783+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.037918+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.038138+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.038308+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.038465+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.038620+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.038760+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.038933+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.039234+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.039417+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.039573+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.039738+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.039920+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.040147+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.040298+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.040466+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.040927+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.041076+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.041264+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.041454+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.041644+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.041803+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.041936+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.042207+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.042395+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.042550+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.042721+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.042889+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.043072+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:01:59 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:01:59 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.043252+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.043415+0000)
Dec 04 11:01:59 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:01:59 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.043579+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.043754+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.043962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.044292+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.044495+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.044786+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.044988+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.045209+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.045365+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.045522+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.045828+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.046225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.046703+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.046937+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.047185+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.047328+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.047650+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.048194+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.048564+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.048783+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.049086+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.049356+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.049539+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdf4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.049757+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.050199+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.050487+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.423750+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.423941+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.424179+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.424319+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.424465+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.424624+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.424796+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.424915+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.425073+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.425275+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.425471+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.425654+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.425793+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.425938+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.426162+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.426318+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.426483+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.426665+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.426839+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.427023+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.427225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.427405+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.427570+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.427752+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.427882+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.428074+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.428320+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.428464+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.428698+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.428854+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.428996+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.429159+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.429272+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.429399+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.429553+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.429687+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.429819+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.430585+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.430713+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.430854+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.431011+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.431234+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.431412+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.431741+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.431977+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.432162+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.432365+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.432526+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.432654+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.432810+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.432947+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.433174+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.433319+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.433459+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.433601+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.433761+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.433904+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.434044+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.434186+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.434340+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.434560+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.434800+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.435020+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.435184+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.435357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.435516+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.435666+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.435836+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.436042+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.436200+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.436356+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.436535+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.694915771s of 299.933593750s, submitted: 24
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.436677+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 573440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.436828+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 221184 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.437304+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.437451+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.437583+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.437719+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.437849+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.437983+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.438160+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.438353+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.438486+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.438633+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.438766+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.438936+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.439089+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.439276+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.439397+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.439547+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.439736+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.439918+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.440125+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.440286+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.440520+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.440666+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.440809+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 204800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.440950+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.441124+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.441287+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.441439+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.441815+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.441964+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.442180+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.442330+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.442517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.442702+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.442854+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.443032+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.443263+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.443435+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.443620+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.443775+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.443921+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.444142+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.444281+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.444400+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.444538+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.444671+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.444828+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.444979+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.445162+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.445366+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.445528+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.445704+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.445860+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.446033+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.446298+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.446515+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.446681+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.446877+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.447089+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.447285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.447465+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.447635+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.447803+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.447950+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.448091+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.448249+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.448467+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.448652+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.448844+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.449032+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.449289+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.449478+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.449747+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.450036+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.450267+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.450440+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.450634+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.450856+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.451042+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.451259+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.451477+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.451722+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.451932+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.452166+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.452323+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.452791+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.452970+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.453185+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.453379+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.453566+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.453757+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.453947+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.454161+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.454368+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.454532+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.454754+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.454961+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.455387+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.456011+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.456376+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.456643+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.457168+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.457644+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.457960+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.458235+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.458387+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.458564+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.458756+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.458958+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.459134+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.459286+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.459492+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.459828+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.460147+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.460753+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.460990+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3fee800
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.999755859s of 117.139999390s, submitted: 90
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 1040384 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.461275+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcebe000/0x0/0x4ffc00000, data 0xab840/0x16c000, compress 0x0/0x0/0x0, omap 0x11ab8, meta 0x2bbe548), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 991232 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.461475+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 120 ms_handle_reset con 0x55c0a3fee800 session 0x55c0a401ec40
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.461720+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 9330688 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983916 data_alloc: 218103808 data_used: 3520
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a2e4e400
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x51efe8/0x5e2000, compress 0x0/0x0/0x0, omap 0x11dfd, meta 0x2bbe203), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.461906+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 9175040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.462071+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 9134080 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 121 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5490380
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.462381+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.462582+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.462839+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988171 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.463200+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.463442+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.463619+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.463799+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.464039+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.464211+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.464361+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.464511+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.464657+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.464818+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.464983+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.465156+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.465312+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.465490+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.465689+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.107776642s of 22.230192184s, submitted: 58
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a2e4e400
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993025 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.465841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 9158656 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 10
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.465990+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 9142272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.466168+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca36000/0x0/0x4ffc00000, data 0x52d5d7/0x5f6000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.466330+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x52e85e/0x5f7000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.466529+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 9027584 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995567 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.466693+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.466865+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.467072+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.467291+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.467527+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 8765440 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814142227s of 10.116048813s, submitted: 35
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999813 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.467691+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 8650752 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 11
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.467863+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.468018+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.468176+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 8273920 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x558809/0x622000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.468436+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 8151040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001261 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.471610+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 6946816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.471826+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 6905856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.471997+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 6782976 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.472146+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fc9e7000/0x0/0x4ffc00000, data 0x57b223/0x645000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.472453+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.159253120s of 10.109436035s, submitted: 78
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006011 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.472714+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 6619136 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9e5000/0x0/0x4ffc00000, data 0x57def9/0x647000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.472870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.473084+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.473332+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.473532+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9d2000/0x0/0x4ffc00000, data 0x590256/0x65a000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008761 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.473707+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.473888+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9c8000/0x0/0x4ffc00000, data 0x59a3b3/0x664000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.474031+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.474176+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.474385+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009539 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.474538+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.474690+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9bc000/0x0/0x4ffc00000, data 0x5a6634/0x670000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.474863+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.569020271s of 12.741366386s, submitted: 41
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.475088+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.475398+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 5365760 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9b3000/0x0/0x4ffc00000, data 0x5af14c/0x679000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006899 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.475684+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 5300224 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.475870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 5292032 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.476196+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 5251072 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.476365+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.476551+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011691 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.476732+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc996000/0x0/0x4ffc00000, data 0x5cac75/0x696000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.476871+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 5021696 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.477063+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882642746s of 10.000583649s, submitted: 38
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2809856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.477174+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1728512 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.477366+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1630208 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014373 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.477524+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1556480 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.477673+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1417216 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb7ca000/0x0/0x4ffc00000, data 0x5f65bb/0x6c2000, compress 0x0/0x0/0x0, omap 0x11f29, meta 0x3d5e0d7), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.477848+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.478051+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.478293+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 1056768 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017817 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.478463+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.478630+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.478761+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.733018875s of 10.001555443s, submitted: 91
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 950272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb7a8000/0x0/0x4ffc00000, data 0x61635f/0x6e2000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.478930+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.479150+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019529 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.479362+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1957888 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb78d000/0x0/0x4ffc00000, data 0x633c4e/0x6ff000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.479565+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.479786+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.479968+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 1826816 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.480283+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb778000/0x0/0x4ffc00000, data 0x6480c0/0x714000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1802240 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023333 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.480432+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.480587+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.480827+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848536491s of 10.001356125s, submitted: 29
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.480966+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb763000/0x0/0x4ffc00000, data 0x65cfdb/0x729000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.481174+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 1867776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022057 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.481370+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.481569+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.481748+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1744896 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.481892+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a5b1a400
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 401408 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.482086+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb72d000/0x0/0x4ffc00000, data 0x68e752/0x75f000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 999424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.482328+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041309 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 12
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 983040 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.482487+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 1196032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.482661+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589168549s of 10.002529144s, submitted: 57
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450e000
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb703000/0x0/0x4ffc00000, data 0x6ba903/0x789000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 1105920 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.482846+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 1073152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.483061+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 1015808 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.483326+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038997 data_alloc: 218103808 data_used: 4260
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6da000/0x0/0x4ffc00000, data 0x6e2609/0x7b2000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.483575+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6bc000/0x0/0x4ffc00000, data 0x6ffebe/0x7d0000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.483799+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 1179648 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.483964+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2170880 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.484331+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 884736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb669000/0x0/0x4ffc00000, data 0x74b4c0/0x81f000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.484511+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059855 data_alloc: 218103808 data_used: 4260
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 679936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.484751+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 663552 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.484995+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600092888s of 10.000102997s, submitted: 172
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 1277952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.485161+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90284032 unmapped: 892928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.485327+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 1662976 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.485546+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063631 data_alloc: 218103808 data_used: 4105
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1630208 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb622000/0x0/0x4ffc00000, data 0x7907b7/0x866000, compress 0x0/0x0/0x0, omap 0x12520, meta 0x3d5dae0), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.485706+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90038272 unmapped: 1138688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.485834+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb5e9000/0x0/0x4ffc00000, data 0x7ca62d/0x8a1000, compress 0x0/0x0/0x0, omap 0x12680, meta 0x3d5d980), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 2072576 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.485976+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 1974272 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.486195+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.486361+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075351 data_alloc: 218103808 data_used: 4755
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.486575+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90054656 unmapped: 2170880 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.486738+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.681773186s of 10.032649994s, submitted: 160
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91299840 unmapped: 925696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.486889+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x7fe7a5/0x8d7000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.487032+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x802d67/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 860160 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.487187+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076485 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.487370+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.487533+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x803215/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.487716+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.487961+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.488225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075757 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.488387+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 491520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.488536+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.903874397s of 10.266777992s, submitted: 19
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 483328 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.488673+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 442368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.488822+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 294912 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb580000/0x0/0x4ffc00000, data 0x833caa/0x90c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.489026+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078121 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.489222+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.489357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91504640 unmapped: 1769472 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.489539+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.489779+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.489960+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079017 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.490191+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.490350+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.490591+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.490736+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.490945+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080545 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.491168+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.491333+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.491517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.491717+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.004943848s of 16.979648590s, submitted: 22
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 2056192 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.491852+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb539000/0x0/0x4ffc00000, data 0x87b015/0x953000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081409 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.492028+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.492183+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.492317+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.492483+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.492635+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082297 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.492763+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb510000/0x0/0x4ffc00000, data 0x8a3c6f/0x97c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.492922+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 933888 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4f3000/0x0/0x4ffc00000, data 0x8c1190/0x999000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.493070+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.493266+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.493481+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4d8000/0x0/0x4ffc00000, data 0x8db7f0/0x9b4000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084117 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924418449s of 11.061837196s, submitted: 25
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.493634+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.493777+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.493930+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.494073+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a5b1a400 session 0x55c0a5b048c0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.494307+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a450e000 session 0x55c0a5f96700
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085581 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee753/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 13
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.494455+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee8b9/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.494685+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.494927+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.495166+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb492000/0x0/0x4ffc00000, data 0x9215f8/0x9fa000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb493000/0x0/0x4ffc00000, data 0x92155d/0x9f9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.495505+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090467 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.560784340s of 10.244839668s, submitted: 209
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.495689+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.495886+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 2392064 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.496067+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.496213+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.496473+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089339 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.496711+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.496920+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.497088+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.497285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.497495+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087675 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.497742+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.497912+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.498055+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.498256+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.498471+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.419944763s of 14.577485085s, submitted: 11
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087819 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.498615+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.498771+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.498968+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.499291+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.499433+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089351 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.499588+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.499815+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.499993+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.500199+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.500385+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089207 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.268507957s of 10.285860062s, submitted: 5
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.500599+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.500773+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.500969+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.501235+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.501399+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088649 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.501543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.501694+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.501882+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.502054+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.502267+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090309 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.970705986s of 10.010634422s, submitted: 8
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.502417+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.502590+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.502904+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.503092+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.503285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092127 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.503471+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.503675+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.503890+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.504093+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.504274+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094885 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.504433+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.504639+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.480938911s of 11.537956238s, submitted: 43
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.504846+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.505084+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.505310+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095029 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.505517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.505679+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.505845+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.506029+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.506205+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097421 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.506331+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.506478+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.449153900s of 10.473722458s, submitted: 15
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.506639+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.506841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.506976+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096815 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.507188+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.507319+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.507540+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.507724+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.508005+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098363 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.508279+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.508484+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.951936722s of 10.007729530s, submitted: 8
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.508701+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.508907+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.509087+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.509304+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.509549+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.509735+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.509878+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.510073+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.510238+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.510392+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.510558+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.995441437s of 11.008138657s, submitted: 5
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.510821+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.510979+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.511158+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.511354+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.511553+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.511710+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.511847+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.512013+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.512225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.512379+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.512612+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.512808+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097789 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.513000+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.616669655s of 12.638894081s, submitted: 13
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.513199+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.513415+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.513628+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.513858+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 2351104 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097805 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.514040+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.514202+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.514330+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.514524+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.514667+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.514845+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.514997+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.879154205s of 10.907876015s, submitted: 15
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.515197+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 2236416 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.515325+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.515473+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099337 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.516217+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.516368+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dde1/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.516517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.516696+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.516837+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099177 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.517009+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.517179+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.517315+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.517456+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.439765930s of 12.481030464s, submitted: 20
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.517554+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098603 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.517687+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.517802+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.517939+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.518158+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.518336+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103199 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.518527+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x93f98a/0xa1c000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.518711+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.518913+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 1122304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.519032+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.519297+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104299 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.519538+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46f000/0x0/0x4ffc00000, data 0x93fa25/0xa1d000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.015718460s of 12.077057838s, submitted: 32
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.519863+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.519993+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.520185+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.520318+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109053 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.520454+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.520596+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.520744+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.520893+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.521048+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109881 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.521197+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x9415da/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.521334+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393504143s of 10.408122063s, submitted: 19
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.521508+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.521849+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.522164+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112547 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.522555+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb468000/0x0/0x4ffc00000, data 0x9416a3/0xa23000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.522825+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.522980+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.523181+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.523436+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111653 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.523630+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.523841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.524045+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508604050s of 11.536386490s, submitted: 13
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.524285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.524469+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112619 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.524658+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1064960 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46b000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.524830+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 1048576 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.524975+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94330880 unmapped: 1040384 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.525156+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.525315+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121177 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.525508+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.525641+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x964870/0xa47000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 958464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.525883+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 745472 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857433319s of 10.014651299s, submitted: 97
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.526067+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.526211+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 1941504 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135499 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.526417+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 1884160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x9e0112/0xac6000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.526639+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 1875968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x9eacc4/0xad1000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.526880+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 1867776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.527044+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3ac000/0x0/0x4ffc00000, data 0x9f885a/0xade000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.527294+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134715 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.527542+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 827392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.527700+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.527886+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.528116+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.528301+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139427 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.528446+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.366982460s of 12.445398331s, submitted: 44
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97017856 unmapped: 1499136 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.528564+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.528728+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.528869+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1425408 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.529056+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149237 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.529254+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb2d4000/0x0/0x4ffc00000, data 0xacf83f/0xbb8000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.529397+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.529544+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 3153920 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.529760+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb29d000/0x0/0x4ffc00000, data 0xb04a92/0xbed000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.529962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157693 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.530142+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 2998272 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291193008s of 10.482179642s, submitted: 116
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.530278+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 2637824 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.530430+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb217000/0x0/0x4ffc00000, data 0xb8911b/0xc73000, compress 0x0/0x0/0x0, omap 0x12dfc, meta 0x3d5d204), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 2514944 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.530573+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 3121152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.530713+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1851392 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174773 data_alloc: 218103808 data_used: 5091
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.530847+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1802240 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.530978+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb19c000/0x0/0x4ffc00000, data 0xc00bf3/0xcee000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 2097152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.531137+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99672064 unmapped: 1990656 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.531266+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb16d000/0x0/0x4ffc00000, data 0xc32018/0xd1d000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1843200 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.531424+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2408448 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb147000/0x0/0x4ffc00000, data 0xc594a7/0xd45000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.531565+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177381 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 2269184 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723609924s of 10.026507378s, submitted: 124
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.531724+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 1171456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb132000/0x0/0x4ffc00000, data 0xc6f06d/0xd59000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.531857+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.532026+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.533382+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.533562+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188495 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.533704+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.533921+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.534135+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.534281+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.534419+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192129 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb067000/0x0/0x4ffc00000, data 0xd37a62/0xe24000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.534593+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 1638400 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.535638+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.634295464s of 11.785771370s, submitted: 94
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.536016+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.536349+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.536649+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190545 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.536853+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb04c000/0x0/0x4ffc00000, data 0xd54161/0xe40000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.537470+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.537668+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.538061+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.538419+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190921 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.538855+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.539036+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.272990227s of 10.301798820s, submitted: 29
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.539275+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.539488+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.539683+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192613 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.539847+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.540141+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.540370+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.540615+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.540748+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194161 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.541070+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c70/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.541279+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.541451+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.541689+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.541880+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195709 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.117662430s of 13.127370834s, submitted: 5
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.542054+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.542273+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55e6c/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.542615+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.542774+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.543018+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199109 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.543231+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.543406+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.543628+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.543833+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.543972+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198087 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55dd4/0xe46000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.544090+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867134094s of 10.890979767s, submitted: 14
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.544291+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.544425+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.544635+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55d37/0xe45000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.544767+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197035 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.544861+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.545040+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.545190+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
                                           Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.545355+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.545514+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197419 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c00/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.546359+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.546507+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055023193s of 11.093473434s, submitted: 18
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.546651+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.546771+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.546907+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.547073+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.547242+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.547374+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.547548+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.547709+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.547848+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.547944+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.548045+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.548176+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.548273+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.548423+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.548703+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.888220787s of 14.927642822s, submitted: 4
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.548885+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.549034+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.549185+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.549359+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.550379+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.552038+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.552366+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.553643+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3fef800
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.555036+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 14
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 1105920 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c4c/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.556416+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001877785s of 10.011025429s, submitted: 5
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.556879+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.558543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.559483+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197819 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.559948+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.560247+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.560445+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.560637+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.560874+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197835 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.561027+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.561225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001707077s of 10.005904198s, submitted: 3
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.561464+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.561720+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.561882+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.562073+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.562266+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.562497+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.562730+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.562870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.563031+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.563271+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169968605s of 10.174468994s, submitted: 2
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.563423+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.563632+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.563813+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.563999+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.564250+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.564559+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.564697+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.564859+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.565047+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.565243+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.565437+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.565638+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.565795+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.565960+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.566172+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.566370+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.566558+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.566767+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.824216843s of 17.850557327s, submitted: 6
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.566922+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.567080+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.567261+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.567410+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.567547+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.567698+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.567882+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.568230+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.568394+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 1531904 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.568543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103325696 unmapped: 1482752 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199031 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.812626839s of 10.004839897s, submitted: 95
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.568680+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 2637824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.568883+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.569072+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.569310+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.569522+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198585 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.569799+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.570004+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.570314+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.570547+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.570747+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 2621440 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201969 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.570916+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108821869s of 10.326163292s, submitted: 42
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103243776 unmapped: 2613248 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.571166+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd578a0/0xe47000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.571350+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.571517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.571725+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205847 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.571925+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.572172+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.572351+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.572528+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.572666+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208173 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.572856+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb041000/0x0/0x4ffc00000, data 0xd59285/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.619541168s of 10.691827774s, submitted: 44
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.573155+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.573322+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.573496+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.573690+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207727 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.574408+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.574617+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.574819+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.574977+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.575187+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208843 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.575387+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.575628+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884953499s of 11.015766144s, submitted: 6
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.575908+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 ms_handle_reset con 0x55c0a3fef800 session 0x55c0a3818380
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.576186+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.576333+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 15
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208555 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.576527+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 2293760 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.576665+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.576870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.577055+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5ae5e/0xe4c000, compress 0x0/0x0/0x0, omap 0x18dca, meta 0x3d57236), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.577212+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215781 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.577383+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.577543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.577742+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.577970+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.578214+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.581476212s of 12.955293655s, submitted: 224
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216753 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.578364+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.578531+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.578697+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.578870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.579056+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214487 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.579228+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.579377+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.579503+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.579694+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.579848+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217965 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.579983+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289826393s of 10.338050842s, submitted: 31
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.580167+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.580357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.580694+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.580826+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222287 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb03a000/0x0/0x4ffc00000, data 0xd5e4e2/0xe52000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.581020+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd5ff61/0xe55000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.581249+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.581382+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.581543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.581724+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223979 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.581858+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.582016+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204831123s of 11.530242920s, submitted: 36
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.582273+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.582437+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd60097/0xe57000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.582611+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225925 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.582752+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.582924+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.583056+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.583255+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd61b66/0xe58000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.583456+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.583787+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd61acb/0xe57000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.584048+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.584315+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.584599+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.584793+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.297651291s of 13.383323669s, submitted: 59
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.584962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.585187+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.585291+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.585490+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.585732+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.585951+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.586289+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.586715+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.586962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.587323+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.587539+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.587845+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.588008+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.588171+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.588367+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.588546+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.588770+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.589006+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.589189+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.589396+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.589640+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.589960+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.590198+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.590405+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.590610+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.590790+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.590945+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.591196+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.591367+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.591525+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.591651+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.591792+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.591926+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.592089+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.592278+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.592510+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.592648+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.592812+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.592982+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.593149+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.247886658s of 39.155723572s, submitted: 13
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.593292+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.593477+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.593664+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.593855+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.594050+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.594270+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.594446+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.594651+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.594798+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.594950+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.595197+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229415 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.595463+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb032000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.258265495s of 12.265155792s, submitted: 3
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.595593+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02d000/0x0/0x4ffc00000, data 0xd6514f/0xe5d000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.595733+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.595866+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.595987+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234729 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.596168+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.596357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.596517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.596951+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.597208+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.597357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.597575+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.597798+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.598030+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.598247+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.598428+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.598635+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.053594589s of 16.302835464s, submitted: 44
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a450f000
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 2203648 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.598826+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 16
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: handle_auth_request added challenge on 0x55c0a3655400
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.599212+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb029000/0x0/0x4ffc00000, data 0xd66ee7/0xe63000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.599571+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 17
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.599770+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.599988+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.600179+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.600395+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.600544+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.600701+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.600895+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.601159+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.601410+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.601571+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.601749+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.601929+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.602078+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.602261+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.602428+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.602581+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.602790+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.603062+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.603289+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.603499+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.603625+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.603754+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.603911+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.604090+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.604259+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.604404+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.604566+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.604705+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.604846+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.604977+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.605137+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.605303+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.605427+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.605548+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.605913+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.606074+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.606314+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.606500+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.606745+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.606901+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.607062+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.607229+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.607378+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.607574+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.607700+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.607824+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.608020+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.608176+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.608301+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.608440+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.608592+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.608724+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.608971+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.609152+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.609372+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.609525+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.609774+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.609955+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.610262+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.749771118s of 62.368705750s, submitted: 11
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.610698+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238939 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.610964+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.611400+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a450f000 session 0x55c0a3694a80
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.611705+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a3655400 session 0x55c0a6381500
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104071168 unmapped: 1785856 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.612012+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 18
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.612236+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238635 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.612484+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.612818+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.613078+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.613294+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.613521+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238779 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.613729+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.142169952s of 11.714550018s, submitted: 184
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.614024+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.614297+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.614597+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.614811+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242433 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.615021+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.721700+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.721981+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.722142+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.722331+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.722549+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.722755+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.722909+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.723048+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.723173+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.723283+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.723458+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.723666+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.723795+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.232194901s of 18.285558701s, submitted: 52
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.723912+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.724056+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.724211+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.724439+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.724619+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.724817+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.724982+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.725182+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.725393+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.725628+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.725841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244343 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.726158+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.726343+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.726517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.726712+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.992785454s of 15.000616074s, submitted: 4
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.726925+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244487 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.727078+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.727317+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.727593+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.727790+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.727935+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244471 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.728182+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb00b000/0x0/0x4ffc00000, data 0xd84fc5/0xe81000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.728346+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.728625+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.728815+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe9000/0x0/0x4ffc00000, data 0xda606f/0xea3000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500107765s of 10.000913620s, submitted: 13
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.728984+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 1679360 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252421 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.729226+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.729560+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.729806+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe2000/0x0/0x4ffc00000, data 0xdacba7/0xeaa000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.729943+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.730114+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104448000 unmapped: 1409024 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259093 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.730288+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 1335296 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.730508+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.730693+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.730925+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.939127922s of 10.002140999s, submitted: 20
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.731131+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 1892352 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255505 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.731257+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.731440+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.731586+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.731755+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.731934+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 2793472 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260241 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.732180+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xea172e/0xf9e000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.732334+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.732567+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.732756+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.055247307s of 10.003301620s, submitted: 24
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.732913+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 2891776 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264103 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.733067+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.733321+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.733476+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.733633+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _renew_subs
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.733814+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 2727936 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.733962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.734188+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.734325+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.734457+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.734606+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.734758+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.734928+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.735089+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.735293+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.735680+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.735989+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.736270+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.736519+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.736665+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.736987+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.737182+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.737357+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.737500+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.737679+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.737849+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.738016+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.738203+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.738360+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.738505+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.738674+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.738887+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.739147+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.739318+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.739494+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.739698+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.739913+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.740053+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.740204+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.740360+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.740567+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.740712+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.740930+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.741134+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.741303+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.741452+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.741629+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.741787+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.741982+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.742146+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.742292+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.742422+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.742590+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.742727+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.742872+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.743010+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.743142+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.743294+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.743471+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.743658+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.743836+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.744017+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.744318+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.744466+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.744600+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.744795+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.744959+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.745087+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.745228+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.745360+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.745482+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.745629+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.745826+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.745960+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 2236416 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.746135+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 3325952 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.746285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3301376 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.746427+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 14344192 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf dump' '{prefix=perf dump}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:27.746595+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf schema' '{prefix=perf schema}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.438171387s of 77.488555908s, submitted: 40
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 14098432 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:28.746720+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc66e/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 14098432 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 19
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:29.746840+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5f06700
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:30.746977+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:31.747159+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Got map version 20
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:32.747317+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:33.747476+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:34.747608+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:35.747746+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:36.747870+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:37.748027+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:38.748157+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:39.748290+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:40.748417+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:41.748550+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:42.748746+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:43.748871+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:44.749034+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:45.749171+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:46.749300+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:47.749438+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:48.749590+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:49.749725+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:50.749876+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:51.750027+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:52.750217+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:53.750373+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:54.750500+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:55.750631+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:56.750758+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:57.750941+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:58.751072+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:59.751148+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:00.751280+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:01.751409+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:02.751706+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:03.752065+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:04.752430+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:05.752616+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:06.752930+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:07.753144+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:08.753385+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:09.753577+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:10.753729+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:11.753880+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:12.754687+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:13.754852+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:14.755031+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:15.755187+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:16.755408+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:17.756540+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:18.757005+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:19.757192+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:20.757566+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:21.757762+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:22.758295+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:23.758655+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:24.758964+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:25.759229+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:26.759456+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:27.759636+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:28.759932+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:29.760093+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:30.760352+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:31.760537+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:32.760821+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:33.761007+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:34.761558+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:35.761717+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:36.761868+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:37.761946+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:38.762088+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:39.762256+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:40.762436+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:41.762633+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:42.762850+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:43.763032+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:44.763202+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:45.763379+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:46.763679+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:47.763884+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:48.764040+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:49.764175+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:50.764345+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:51.764488+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:52.764615+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:53.764760+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:54.764936+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:55.765184+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:56.765332+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:57.765457+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:58.765651+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:59.765787+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:00.765920+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:01.766059+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:02.766275+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:03.766472+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:04.766618+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:05.766770+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:06.767480+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:07.767710+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:08.767950+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:09.768239+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:10.768624+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:11.768908+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:12.769155+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:13.769308+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:14.769550+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:15.769685+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:16.769841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:17.770138+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:18.770333+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:19.770489+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:20.770623+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:21.770764+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:22.770952+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:23.771127+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:24.771290+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:25.771473+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:26.771664+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:27.771863+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:28.772044+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:29.772196+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:30.772341+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:31.772722+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:32.772894+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:33.773063+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:34.773243+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:35.773491+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:36.773696+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:37.773873+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:38.774041+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:39.774194+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:40.774449+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:41.774701+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:42.774947+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:43.775205+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:44.775456+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:45.775706+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:46.775877+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:47.776269+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:48.776434+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:49.776668+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:50.776885+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:51.777058+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:52.777246+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:53.777416+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:54.777624+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:55.777787+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:56.777995+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:57.778193+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:58.778457+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:59.778621+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:00.778759+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:01.779030+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:02.779354+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:03.779532+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:04.779720+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:05.779891+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:06.780233+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:07.780399+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2835 syncs, 3.75 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1646 writes, 3739 keys, 1646 commit groups, 1.0 writes per commit group, ingest: 2.53 MB, 0.00 MB/s
                                           Interval WAL: 1646 writes, 515 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:08.780664+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:09.780930+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:10.781153+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:11.781628+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:12.781975+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:13.782306+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:14.782580+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:15.782807+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:16.783135+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:17.783443+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:18.783747+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:19.783936+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:20.784448+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:21.784835+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:22.785214+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:23.785392+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:24.785706+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:25.786023+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:26.786307+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:27.786505+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:28.786714+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:29.786905+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:30.787066+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:31.787263+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:32.787514+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:33.787775+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:34.787945+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:35.788178+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:36.788328+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:37.788469+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:38.788634+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:39.788768+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:40.788908+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:41.789080+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:42.789377+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:43.789549+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:44.789682+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:45.789885+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:46.790058+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:47.790191+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:48.790314+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:49.790466+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:50.790567+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:51.790727+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:52.790902+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:53.791144+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:54.791303+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:55.791469+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:56.791598+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:57.791805+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:58.791941+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:59.792071+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:00.792154+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:01.792281+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:02.792451+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:03.792575+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:04.792717+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:05.792850+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:06.793027+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:07.793173+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:08.793358+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:09.793501+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:10.793643+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:11.793795+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:12.793974+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:13.794154+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:14.794340+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:15.794543+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:16.794690+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:17.794986+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:18.795442+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:19.795796+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:20.796025+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:21.796250+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:22.796503+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 235.305908203s of 235.333023071s, submitted: 162
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:23.796830+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:24.796952+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:25.797163+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:26.797384+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 13746176 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:27.797541+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 12681216 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:28.797722+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 12664832 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:29.797905+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:30.798038+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:31.798195+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:32.798515+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:33.798659+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:34.798778+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:35.799004+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:36.799152+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:37.799348+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:38.799497+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:39.799748+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:40.800046+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:41.800228+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:42.800444+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:43.800641+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:44.800801+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:45.800946+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:46.801222+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:47.801421+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:48.801618+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:49.801793+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:50.801938+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:51.802077+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:52.802329+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:53.802485+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:54.802610+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:55.802754+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:56.802917+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:57.803067+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:58.803208+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:59.803332+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:00.803468+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:01.803656+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:02.803824+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:03.803916+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:04.804014+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:05.804135+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:06.804273+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:07.804400+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:08.804592+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:09.809587+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:10.809733+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:11.810033+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:12.810231+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:13.810419+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:14.810555+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:15.810711+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:16.841000+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:17.841155+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:18.841404+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:19.841865+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:20.842332+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:21.842517+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:22.842758+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:23.843244+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:24.843689+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:25.844271+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:26.844417+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:27.844725+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:28.845003+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:29.845410+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:30.845671+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:31.845885+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:32.846278+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:33.846505+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:34.846710+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:35.846910+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:36.847161+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:37.847370+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:38.847589+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:39.847768+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:40.848173+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:41.848300+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:42.848548+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:43.848700+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:44.848897+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:45.849048+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:46.849234+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:47.849405+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:48.849550+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:49.849757+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:50.849989+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:51.850180+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:52.850366+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:53.850521+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:54.850645+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:55.851231+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:56.851363+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:57.851448+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:58.851569+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:59.851707+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:00.851818+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:01.851962+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:02.852158+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:03.852285+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:04.852434+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:05.852574+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:06.852707+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:07.852854+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:08.853011+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:09.853164+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:10.853288+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:11.853469+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:12.853652+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:13.853787+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:14.853919+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:15.854044+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:16.854177+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:17.854362+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:18.854524+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:19.854661+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:20.854791+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:21.855336+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:22.855554+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:23.855726+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:24.855882+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:25.856155+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:26.856475+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:27.856743+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:28.856888+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:29.857082+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:30.857355+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:31.857603+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:32.857758+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:33.857909+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:34.858053+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:35.858201+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:36.858394+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:37.858586+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:38.858712+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:39.858865+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:40.859051+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:41.859216+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:42.859391+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:43.859651+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:44.859834+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:45.860004+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:46.860152+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:47.861183+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:48.861339+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:49.861581+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:50.861730+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:51.861904+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:52.862295+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:53.862423+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:54.862562+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:55.862683+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:56.862804+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:57.862958+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:58.863185+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:59.863378+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:00.863527+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:01.863633+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:02.863802+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:03.863953+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:04.864156+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:05.864309+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:06.864427+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:07.864587+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:08.864707+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:09.864857+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:10.864981+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:11.865130+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:12.865330+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:13.865458+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:14.865620+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:15.865744+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:16.866015+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:17.866203+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:18.866392+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:19.866500+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:20.866630+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:21.866774+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:22.866952+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:23.867123+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:24.867257+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:25.867409+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:26.867566+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:27.867795+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:28.867963+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:29.868119+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:30.869057+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:31.869526+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:32.870850+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:33.872230+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:34.872381+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:35.872626+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:36.872897+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 04 11:02:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010535752' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:37.873159+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:38.873337+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:39.873409+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:40.873582+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:41.873791+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:42.874005+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:43.874180+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:44.874446+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:45.874650+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:46.874810+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:47.875076+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:48.875260+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:49.875411+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:50.875611+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:51.875809+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:52.876021+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:53.876208+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:54.876353+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:55.876580+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:56.876749+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:57.876986+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:58.877157+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:59.877313+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:00.877508+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:01.877656+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:02.877851+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:03.878011+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:04.878145+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:05.878282+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:06.878468+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets getting new tickets!
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:07.878780+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _finish_auth 0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:07.879881+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:08.878913+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:09.879135+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:10.879327+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:11.879556+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:12.879804+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:13.879951+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:14.880137+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:15.880276+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a5b1a800
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: get_auth_request con 0x55c0a6259400 auth_method 0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:16.880452+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:17.880590+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:18.880738+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:19.880885+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:20.881060+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:21.881225+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:22.881410+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:23.881564+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:24.881690+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:25.881841+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:26.882023+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:27.882202+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:28.882406+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:29.882529+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:30.882671+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:31.883241+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:32.883441+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:33.883596+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:34.883843+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:35.884064+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:36.884276+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:37.884458+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:38.884622+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:39.884840+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:40.885002+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:41.885167+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:42.885384+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:43.885615+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:44.885924+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:45.886219+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:46.886466+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:47.886690+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:48.886887+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:49.887149+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:50.887388+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:51.887796+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:52.888143+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:53.888410+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:54.888648+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:55.888897+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:56.889223+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:57.889558+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:58.889739+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:59.889896+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:00.890045+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:01.890189+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:02.890355+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:03.890558+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:04.890772+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:05.890909+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:06.891053+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:07.891170+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:08.891411+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:09.891617+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:10.891759+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:11.891928+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:12.892148+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:13.892311+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:14.892472+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:15.892737+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:16.892966+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:17.893294+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:18.893527+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:19.893724+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:20.893928+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:21.894191+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:22.894428+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.117980957s of 300.469848633s, submitted: 90
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:23.894595+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:24.894792+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:25.894973+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:26.895170+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 12640256 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:27.895359+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 12566528 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:28.895498+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:00 compute-0 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:00 compute-0 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec 04 11:02:00 compute-0 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 12795904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: tick
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_tickets
Dec 04 11:02:00 compute-0 ceph-osd[88205]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:29.895641+0000)
Dec 04 11:02:00 compute-0 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:00 compute-0 nova_compute[244644]: 2025-12-04 11:02:00.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14896 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:00 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:00 compute-0 ceph-mon[75358]: from='client.14886 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2506140270' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 04 11:02:00 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2010535752' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 04 11:02:00 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 04 11:02:00 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678272308' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 11:02:00 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14900 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 11:02:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:01 compute-0 nova_compute[244644]: 2025-12-04 11:02:01.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:01 compute-0 nova_compute[244644]: 2025-12-04 11:02:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 04 11:02:01 compute-0 nova_compute[244644]: 2025-12-04 11:02:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 04 11:02:01 compute-0 nova_compute[244644]: 2025-12-04 11:02:01.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 04 11:02:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 04 11:02:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989233441' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 11:02:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14904 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 11:02:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:01 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:01 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 04 11:02:01 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075459707' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='client.14892 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='client.14896 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: pgmap v1583: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/678272308' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1989233441' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.379 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.380 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.380 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.385 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 04 11:02:02 compute-0 nova_compute[244644]: 2025-12-04 11:02:02.386 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 11:02:02 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:02 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:02 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 04 11:02:02 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318634962' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:02:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726729182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.036 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.14900 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.14904 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.14908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1075459707' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: pgmap v1584: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2318634962' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 04 11:02:03 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/726729182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.210 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.211 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.211 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.212 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.333 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.334 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 04 11:02:03 compute-0 nova_compute[244644]: 2025-12-04 11:02:03.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 04 11:02:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:03 compute-0 crontab[276650]: (root) LIST (root)
Dec 04 11:02:03 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 04 11:02:03 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185841392' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 11:02:03 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14922 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:03 compute-0 podman[276746]: 2025-12-04 11:02:03.971683845 +0000 UTC m=+0.073069825 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 04 11:02:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 04 11:02:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475921126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:02:04 compute-0 nova_compute[244644]: 2025-12-04 11:02:04.034 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 04 11:02:04 compute-0 nova_compute[244644]: 2025-12-04 11:02:04.039 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 04 11:02:04 compute-0 ceph-mon[75358]: from='client.14912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:04 compute-0 ceph-mon[75358]: from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2185841392' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 04 11:02:04 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/475921126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 04 11:02:04 compute-0 nova_compute[244644]: 2025-12-04 11:02:04.160 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 04 11:02:04 compute-0 nova_compute[244644]: 2025-12-04 11:02:04.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 04 11:02:04 compute-0 nova_compute[244644]: 2025-12-04 11:02:04.178 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 04 11:02:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 04 11:02:04 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293761273' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 04 11:02:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:04 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:04 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:02:04 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14930 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.388773+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.388948+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.389089+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.389288+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.389567+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.389692+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.389826+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.389961+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.390086+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.390237+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.390377+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.390532+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.390675+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.390928+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.391078+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.391241+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.391383+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.391523+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.391650+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.391830+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.391983+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.392176+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.392341+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.392571+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.392724+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.392858+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.393012+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.393203+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.393346+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.393473+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.393625+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.393765+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.393903+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.394060+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.394179+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.394299+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.394428+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.394584+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.394717+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.394867+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.395012+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.395194+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.395341+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.395510+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.395666+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.395839+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.395994+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.396165+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.396302+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.396453+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.396588+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.396733+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.396955+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.397202+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.397394+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.397530+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.397707+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.397930+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.398244+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.398425+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.398636+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.398839+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.399080+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.399336+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.399585+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.399763+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.399917+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.400183+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.400386+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.400568+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.400839+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.400996+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.401180+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.401591+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.401767+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.401885+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.402017+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.402154+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.402285+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.402429+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.402582+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.402720+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.402857+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.403053+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590067fb800 session 0x559004f09340
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559009534400
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590071f1800 session 0x5590071bafc0
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559007746000
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.403312+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.403479+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.403620+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.403796+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.403932+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.404318+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.404553+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.404817+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.405241+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.405540+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.405794+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.406073+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.406303+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.406483+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.406679+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.406856+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.407030+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.407191+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.407427+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.407676+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.407921+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.408141+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.408319+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.408498+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.408729+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.408941+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.409142+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.409374+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.409551+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.409757+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.409885+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.410069+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.410266+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.410393+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.410544+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.410737+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.410922+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.411107+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.411306+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.411840+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.411958+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.412079+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.412251+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.412384+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.412575+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.412736+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.412894+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.413052+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.413176+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.413361+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.413508+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.413700+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.413873+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.414203+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.414371+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.414504+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.414660+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.414808+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.414936+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.415165+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.415344+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.415474+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.415627+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.415741+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.416141+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.416332+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.416544+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.416762+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.416892+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.065643311s of 300.198425293s, submitted: 90
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.417044+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.417219+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.417379+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.417563+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.417754+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:28.417902+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.418055+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.418189+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.418327+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.418482+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.418639+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.418800+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.418948+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.419167+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.419348+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.419550+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.419697+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.419888+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.420043+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.420154+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.420299+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.420473+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.420604+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.420730+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.420891+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.421025+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.421170+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.421300+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.421461+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.421592+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.421742+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.421892+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.422035+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.422189+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.422405+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.423081+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.423335+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.423472+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.423594+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.423742+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.423966+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.424380+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.424532+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.424696+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.424841+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.424970+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.425161+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.425316+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.425493+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.425632+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.425807+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.425989+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.426167+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.426308+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.426444+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.426603+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.426738+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.426874+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.427077+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.427270+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.427455+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.427584+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.427724+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.427887+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.428061+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.428269+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.428418+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.428627+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.428764+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.428929+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.429111+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.429259+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.429383+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.429547+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.429660+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.429797+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.429890+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.430015+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.430159+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.430305+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.430399+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.430515+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.430625+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.430763+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.430958+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.431111+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.431343+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.431496+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.431631+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.431750+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.431940+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.432084+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.432237+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.432391+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.432530+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.432691+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.432822+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.432991+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.433177+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.433317+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.433512+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.433642+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.433776+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.433945+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.434085+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.434227+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.434435+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.434615+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.434804+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.434970+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.435187+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.435480+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.435685+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.435867+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.436018+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.436211+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.436408+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.436559+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.436726+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.436877+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.437074+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.437198+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.437369+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.437519+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.437634+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.437774+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.437974+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.438192+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.438395+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.438554+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.438729+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.438947+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.439251+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.439441+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.439564+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.439744+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.439876+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.440020+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.440199+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.440379+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.440607+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.440777+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.440936+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.441071+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.441193+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.441394+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.441577+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.441748+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.441948+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.442155+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.442339+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.442739+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.442906+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.443064+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.443201+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.443348+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.443500+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.443625+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.443814+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.443969+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.444155+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.444389+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.444565+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.444739+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.444886+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.445037+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.445253+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.445375+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.445509+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.445650+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.445862+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.446002+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.446238+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.446391+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.446530+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.446692+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.446832+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.446989+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.447165+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.447324+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.447468+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.447621+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.447760+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.447895+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.448432+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.448602+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.448749+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.448930+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.449060+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.449228+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.449400+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.449553+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.449694+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.449820+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.449954+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.450123+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.450270+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.450408+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.450597+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.450724+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.450924+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.451190+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.451313+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.451472+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.452078+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.452648+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.453509+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.454010+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.454333+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.454778+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.455169+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.455520+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.455743+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.455985+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.456124+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.456450+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.201       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.456696+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.456915+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.457135+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.457280+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.457458+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.457599+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.457746+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.457870+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.458027+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.458229+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.458471+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.458713+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.458909+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.459084+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.459286+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.459407+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.459603+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.459871+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.460069+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.460268+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.460444+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.460596+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.460799+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.460951+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.461170+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.461316+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.461437+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.461639+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.461809+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.461903+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.462077+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.462316+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.462465+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.462608+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.462803+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.462964+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.463154+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.463326+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.463463+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.463684+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.463889+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.464062+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.464245+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.464397+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.464613+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.464738+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.464897+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.465086+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.465331+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.465487+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.465659+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.465824+0000)
Dec 04 11:02:04 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.466019+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.466182+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.466391+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.466569+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.466726+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.466919+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.467091+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.467383+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.467501+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.467645+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.467792+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.467956+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.468190+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.468343+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.468550+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.468749+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.468998+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.469141+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.469417+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.469629+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.469871+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.470047+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.470291+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.470427+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.470561+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.470702+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.470891+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.471048+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.471224+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.471373+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.471529+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.471693+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.836059570s of 299.876525879s, submitted: 22
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.471843+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.471974+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.472132+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.472260+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.472382+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.472513+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.472625+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.472767+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.472928+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.473176+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.473395+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.473524+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.473684+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.473811+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.473985+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.474082+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.474314+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.474446+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.474664+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.474818+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.475005+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.475148+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.475325+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.475502+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.475642+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.475785+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.475922+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.476037+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.476284+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.476429+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.476605+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.476780+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.476964+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.477522+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.478043+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.478533+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.478683+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.479089+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.479583+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.479825+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.480151+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.480491+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.480652+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.480925+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.481180+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.481340+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.481573+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.481733+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.481962+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.482206+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.482464+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.482781+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.482980+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.483179+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.483392+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.483623+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.483784+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.483958+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.484156+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.484316+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.484589+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.484735+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.484933+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.485118+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.485358+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.485543+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.485671+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.485809+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.485950+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.486127+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.486859+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.486982+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.487167+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.487316+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.487525+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.487676+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.578043+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.578182+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.578354+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.578502+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.578730+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.578952+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.579149+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.579317+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.579412+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.579534+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.579661+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.579861+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.580038+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.580265+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.580551+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.580734+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.580968+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.581180+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.581370+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.581571+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.581715+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.581889+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.582293+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.583383+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.584650+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:04 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:04 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.585827+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.586617+0000)
Dec 04 11:02:04 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:04 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.587145+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.587378+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.587549+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.587783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.587946+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.588305+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.588665+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.589023+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.589356+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.589581+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.589744+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.589968+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.590172+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.590417+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008cfc000
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.996520996s of 117.133117676s, submitted: 90
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 753664 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.590639+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce52000/0x0/0x4ffc00000, data 0x11abd4/0x1d8000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 745472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.590844+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 17481728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 120 ms_handle_reset con 0x559008cfc000 session 0x559009955340
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.591009+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008d7f400
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 90808320 unmapped: 8806400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.591183+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136303 data_alloc: 218103808 data_used: 5976
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 17096704 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 ms_handle_reset con 0x559008d7f400 session 0x559008e3d880
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x191e3ca/0x19e1000, compress 0x0/0x0/0x0, omap 0x106c6, meta 0x2bbf93a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.591370+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.591617+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.591971+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.592192+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.592380+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141665 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.592700+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.592921+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.593234+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.593493+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.593718+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.593878+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.594036+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.594217+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.594622+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.594768+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.594954+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.595120+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.595265+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.595480+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.466564178s of 22.644350052s, submitted: 41
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.595717+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 10
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143287 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.595897+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.596058+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008cf0400
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.596175+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.596359+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.596535+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144979 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.596704+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x11671, meta 0x2bbe98f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.596839+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.596983+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.597189+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x116d5, meta 0x2bbe92b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.574191093s of 10.002868652s, submitted: 9
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.597363+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 11
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149895 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.597557+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.597733+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb641000/0x0/0x4ffc00000, data 0x1921c53/0x19ea000, compress 0x0/0x0/0x0, omap 0x11be1, meta 0x2bbe41f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.597982+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.598190+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.598340+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63b000/0x0/0x4ffc00000, data 0x19239f3/0x19ef000, compress 0x0/0x0/0x0, omap 0x1244f, meta 0x2bbdbb1), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153339 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.598481+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.598625+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63d000/0x0/0x4ffc00000, data 0x1923a58/0x19ef000, compress 0x0/0x0/0x0, omap 0x125d3, meta 0x2bbda2d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.598777+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.598945+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.307727814s of 10.003003120s, submitted: 55
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.599188+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157951 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.599387+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.599597+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19255a1/0x19f2000, compress 0x0/0x0/0x0, omap 0x13360, meta 0x2bbcca0), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.599781+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.599954+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.600215+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156927 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.600380+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x192566b/0x19f2000, compress 0x0/0x0/0x0, omap 0x13585, meta 0x2bbca7b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.600502+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.600673+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 14950400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.600815+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.600983+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13a6f, meta 0x2bbc591), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158173 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.601199+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.601363+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.946354866s of 13.003514290s, submitted: 32
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.601528+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13b4d, meta 0x2bbc4b3), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.601656+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.601816+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157455 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.601971+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13d95, meta 0x2bbc26b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.602142+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.602344+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x192589a/0x19f3000, compress 0x0/0x0/0x0, omap 0x13f02, meta 0x2bbc0fe), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.602487+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.602654+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158813 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.602809+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.602935+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.955944061s of 10.002535820s, submitted: 20
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.603066+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x192585d/0x19f0000, compress 0x0/0x0/0x0, omap 0x14225, meta 0x2bbbddb), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.603202+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.603353+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163967 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.603504+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.603648+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19275c7/0x19f4000, compress 0x0/0x0/0x0, omap 0x149f5, meta 0x2bbb60b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.603819+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.603943+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.604126+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x19275f6/0x19f3000, compress 0x0/0x0/0x0, omap 0x14b15, meta 0x2bbb4eb), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165991 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.604239+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb634000/0x0/0x4ffc00000, data 0x19290da/0x19f6000, compress 0x0/0x0/0x0, omap 0x14d43, meta 0x2bbb2bd), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.604364+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.678595543s of 10.002803802s, submitted: 82
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.604506+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.604636+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.604754+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x1422b, meta 0x2bbbdd5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166245 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.604883+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.605023+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.605184+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.605326+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x14347, meta 0x2bbbcb9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.605478+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165655 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.605629+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.605754+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.605904+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x1457f, meta 0x2bbba81), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.606041+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.606299+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.570578575s of 13.004203796s, submitted: 16
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.606610+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165495 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.606885+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.607192+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x14627, meta 0x2bbb9d9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008d81400
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.607335+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 14770176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19293e5/0x19f7000, compress 0x0/0x0/0x0, omap 0x14747, meta 0x2bbb8b9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 12
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.607630+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 14696448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.607900+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170315 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 14688256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.608143+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19296ba/0x19f7000, compress 0x0/0x0/0x0, omap 0x14867, meta 0x2bbb799), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.608381+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.608597+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.608848+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887221336s of 10.005003929s, submitted: 46
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.609081+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173441 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.609368+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.609660+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 14622720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x192d066/0x19fd000, compress 0x0/0x0/0x0, omap 0x14fe9, meta 0x2bbb017), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.609939+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.610184+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.610446+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183767 data_alloc: 218103808 data_used: 6561
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 14516224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb624000/0x0/0x4ffc00000, data 0x1930971/0x1a02000, compress 0x0/0x0/0x0, omap 0x15745, meta 0x2bba8bb), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.610598+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.610841+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.611194+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 12361728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.611396+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 12345344 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb61f000/0x0/0x4ffc00000, data 0x19360be/0x1a0b000, compress 0x0/0x0/0x0, omap 0x1637b, meta 0x2bb9c85), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.471419334s of 10.002448082s, submitted: 188
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.611566+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194735 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.611743+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.611940+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 12312576 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.612171+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.612384+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1939a3d/0x1a12000, compress 0x0/0x0/0x0, omap 0x16e2c, meta 0x2bb91d4), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.612524+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199515 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.612773+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.612934+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.613287+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.613476+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264224052s of 10.198055267s, submitted: 77
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.613668+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198365 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b56b/0x1a14000, compress 0x0/0x0/0x0, omap 0x1841a, meta 0x2bb7be6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.613878+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.614190+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.614453+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.614698+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.614863+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199897 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.615180+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.615335+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.615507+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.615707+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.554255486s of 10.002036095s, submitted: 6
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.615873+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199753 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.616015+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b66b/0x1a15000, compress 0x0/0x0/0x0, omap 0x189ae, meta 0x2bb7652), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.616153+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.616313+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.616489+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.616649+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.616804+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.617029+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.617234+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.617444+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.617597+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.617749+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.617930+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.618203+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060717583s of 14.003334045s, submitted: 8
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.618418+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.618567+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.618746+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ace7, meta 0x2bb5319), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.618886+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.619203+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.619430+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.619566+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.619719+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.619852+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.620000+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.796292305s of 10.001618385s, submitted: 11
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.620208+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.620356+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.620485+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.620677+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.620839+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 ms_handle_reset con 0x559008d81400 session 0x55900771efc0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.621010+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 13
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.621216+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.621352+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.621486+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10780672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.621622+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715125084s of 10.001788139s, submitted: 197
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b864/0x1a15000, compress 0x0/0x0/0x0, omap 0x1b9f0, meta 0x2bb4610), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.621816+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.622026+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200871 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.622188+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.622386+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.622547+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 10747904 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.622771+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b993/0x1a15000, compress 0x0/0x0/0x0, omap 0x1c126, meta 0x2bb3eda), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 10731520 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.622927+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200281 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.623079+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.623255+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.623409+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c596, meta 0x2bb3a6a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.623631+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.623813+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200297 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.624000+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.359195709s of 13.108038902s, submitted: 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.624190+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.624415+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.624573+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.624716+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200121 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.624875+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.625057+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.625212+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.625382+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bc27/0x1a16000, compress 0x0/0x0/0x0, omap 0x1cde8, meta 0x2bb3218), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.625530+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205053 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.625689+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.005226135s of 10.002261162s, submitted: 14
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.625861+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.626018+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.626179+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.626334+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205867 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.626561+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.626735+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.626891+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.627062+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.627205+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206603 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1de8c, meta 0x2bb2174), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.627354+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.561585426s of 10.002349854s, submitted: 22
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.628269+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1e199, meta 0x2bb1e67), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.628401+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb614000/0x0/0x4ffc00000, data 0x193c0b4/0x1a18000, compress 0x0/0x0/0x0, omap 0x1e227, meta 0x2bb1dd9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.628566+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.628709+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206715 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.628924+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.629077+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.629308+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.629501+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.629687+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212201 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.629839+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.630009+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.303206444s of 10.570754051s, submitted: 62
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.630149+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.630361+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193f860/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1f202, meta 0x2bb0dfe), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.630515+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213893 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.630681+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.630823+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.630985+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.631171+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.631311+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214721 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.631475+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.631609+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.631775+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.631899+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.350849152s of 12.433691978s, submitted: 6
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.632007+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f863, meta 0x2bb079d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214737 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.632138+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.632292+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.632457+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fbb7, meta 0x2bb0449), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.632660+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.632806+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214163 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.632975+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.633164+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.633312+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fcd3, meta 0x2bb032d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.633502+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.983986855s of 10.001843452s, submitted: 9
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.633695+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.633833+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1ffe0, meta 0x2bb0020), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.633979+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.634130+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 10452992 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fabe/0x1a1c000, compress 0x0/0x0/0x0, omap 0x200fc, meta 0x2baff04), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.634336+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.634494+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.634635+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 10444800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.634786+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.634933+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.635078+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fb32/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20450, meta 0x2bafbb0), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.635255+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.504415512s of 10.521648407s, submitted: 9
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215695 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.635403+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.635585+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193faee/0x1a1d000, compress 0x0/0x0/0x0, omap 0x2075d, meta 0x2baf8a3), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.635740+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.635977+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fcb6/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20995, meta 0x2baf66b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.636131+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218185 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.636304+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.636453+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.636600+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.636716+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.636854+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.069396019s of 10.170839310s, submitted: 20
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217611 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.637030+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fd52/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20eda, meta 0x2baf126), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.637263+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.637427+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fdb7/0x1a1d000, compress 0x0/0x0/0x0, omap 0x211e7, meta 0x2baee19), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.637605+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.637754+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218729 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.637905+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.638052+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.638213+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fe1c/0x1a1d000, compress 0x0/0x0/0x0, omap 0x21610, meta 0x2bae9f0), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.638396+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.638609+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fe4b/0x1a1c000, compress 0x0/0x0/0x0, omap 0x217ba, meta 0x2bae846), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217995 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.638743+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.638992+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.992568970s of 12.027859688s, submitted: 18
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.639197+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.639377+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.639561+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb5f7000/0x0/0x4ffc00000, data 0x19586a5/0x1a35000, compress 0x0/0x0/0x0, omap 0x21801, meta 0x2bae7ff), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223627 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 9969664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.639689+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.640142+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0x1985ab0/0x1a62000, compress 0x0/0x0/0x0, omap 0x21964, meta 0x3d4e69c), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.640292+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92471296 unmapped: 7143424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.640424+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92528640 unmapped: 7086080 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.640563+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234575 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6799360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.640722+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 6848512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.640860+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e9000/0x0/0x4ffc00000, data 0x19c5e65/0x1aa3000, compress 0x0/0x0/0x0, omap 0x22199, meta 0x3d4de67), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e1000/0x0/0x4ffc00000, data 0x19cdc69/0x1aab000, compress 0x0/0x0/0x0, omap 0x22271, meta 0x3d4dd8f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.162047386s of 10.287876129s, submitted: 58
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92839936 unmapped: 6774784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.640992+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 6479872 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.641202+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 6389760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22541, meta 0x3d4dabf), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.641345+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228761 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6709248 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.641423+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22739, meta 0x3d4d8c7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6610944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.641596+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 6316032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.641764+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 6275072 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.641935+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 6266880 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.642159+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232545 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 6029312 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.642306+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa387000/0x0/0x4ffc00000, data 0x1a2844f/0x1b05000, compress 0x0/0x0/0x0, omap 0x22a96, meta 0x3d4d56a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 4759552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.642466+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.685272217s of 10.002084732s, submitted: 83
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 4923392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.642601+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 5120000 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.642759+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 4947968 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.642880+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245549 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 4694016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.643003+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.643165+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa305000/0x0/0x4ffc00000, data 0x1aa81ef/0x1b87000, compress 0x0/0x0/0x0, omap 0x235e8, meta 0x3d4ca18), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.643309+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 4685824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa2fb000/0x0/0x4ffc00000, data 0x1ab291d/0x1b91000, compress 0x0/0x0/0x0, omap 0x238ae, meta 0x3d4c752), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.643498+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 4554752 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.643640+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253243 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95166464 unmapped: 4448256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.643802+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa280000/0x0/0x4ffc00000, data 0x1b28cb3/0x1c0a000, compress 0x0/0x0/0x0, omap 0x2443b, meta 0x3d4bbc5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 3039232 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.643946+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.631328583s of 10.002451897s, submitted: 112
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 3588096 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.644062+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 3457024 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.644277+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 3366912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.644545+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1b5d7a7/0x1c3e000, compress 0x0/0x0/0x0, omap 0x249db, meta 0x3d4b625), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254393 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 3162112 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.644692+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.644840+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.644961+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.645143+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.645293+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa1ec000/0x0/0x4ffc00000, data 0x1bbf955/0x1ca0000, compress 0x0/0x0/0x0, omap 0x25173, meta 0x3d4ae8d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264037 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2056192 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.645512+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 1835008 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.645664+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.661271095s of 10.002766609s, submitted: 80
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 1777664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.645955+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 1687552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.646174+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98123776 unmapped: 2539520 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.646376+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268593 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa14e000/0x0/0x4ffc00000, data 0x1c5b085/0x1d3d000, compress 0x0/0x0/0x0, omap 0x25833, meta 0x3d4a7cd), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98238464 unmapped: 2424832 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.646645+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.646901+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.647090+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.647339+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 1843200 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.647505+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa137000/0x0/0x4ffc00000, data 0x1c75450/0x1d55000, compress 0x0/0x0/0x0, omap 0x25a20, meta 0x3d4a5e0), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa10b000/0x0/0x4ffc00000, data 0x1ca12c8/0x1d81000, compress 0x0/0x0/0x0, omap 0x25f3b, meta 0x3d4a0c5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267149 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.647738+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.647919+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730478287s of 10.002023697s, submitted: 71
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100179968 unmapped: 1531904 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.648147+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x261c3, meta 0x3d49e3d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.648307+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x262e3, meta 0x3d49d1d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.648505+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277587 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1515520 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.648649+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.648843+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.649029+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.649215+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.649547+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa065000/0x0/0x4ffc00000, data 0x1d45c1e/0x1e27000, compress 0x0/0x0/0x0, omap 0x26ca5, meta 0x3d4935b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283849 data_alloc: 218103808 data_used: 7211
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.649757+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.649944+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 3072000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.822227478s of 10.002123833s, submitted: 99
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.650125+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d479b6/0x1e2b000, compress 0x0/0x0/0x0, omap 0x27427, meta 0x3d48bd9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.650289+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.650564+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d47ae5/0x1e2b000, compress 0x0/0x0/0x0, omap 0x274b7, meta 0x3d48b49), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287513 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.650765+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.650971+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05c000/0x0/0x4ffc00000, data 0x1d49580/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c2c, meta 0x3d483d4), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.651161+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.651331+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.651542+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27e6c, meta 0x3d48194), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287719 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.651687+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.651842+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.652141+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.652399+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.905422211s of 11.945888519s, submitted: 27
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c0c, meta 0x3d483f4), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.652561+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294435 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.652685+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.661728+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.661863+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.662020+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x1d4cd05/0x1e34000, compress 0x0/0x0/0x0, omap 0x285b1, meta 0x3d47a4f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.662210+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293795 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.662368+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.662605+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.662751+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa054000/0x0/0x4ffc00000, data 0x1d4ea03/0x1e36000, compress 0x0/0x0/0x0, omap 0x28e3e, meta 0x3d471c2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.662897+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821330070s of 10.003786087s, submitted: 83
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.663030+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299313 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.663196+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.663329+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa052000/0x0/0x4ffc00000, data 0x1d504cd/0x1e38000, compress 0x0/0x0/0x0, omap 0x2966d, meta 0x3d46993), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.663449+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.663617+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.663759+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 2998272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306283 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.663893+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.663987+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.664091+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x1d53e14/0x1e3e000, compress 0x0/0x0/0x0, omap 0x2a2ce, meta 0x3d45d32), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.664258+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 2973696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.688855171s of 10.056839943s, submitted: 73
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.664367+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 2965504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306057 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.664543+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 2932736 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.664685+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.664822+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.665011+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa04a000/0x0/0x4ffc00000, data 0x1d55ba7/0x1e40000, compress 0x0/0x0/0x0, omap 0x2ab5c, meta 0x3d454a4), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.665152+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311575 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.665266+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.665385+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.665555+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.665717+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b14e, meta 0x3d44eb2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.665884+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.666062+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310999 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.666222+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.768828392s of 13.002545357s, submitted: 53
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.666691+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.666844+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b195, meta 0x3d44e6b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.667219+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.667532+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315337 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.667782+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.667970+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.668680+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d59284/0x1e47000, compress 0x0/0x0/0x0, omap 0x2b785, meta 0x3d4487b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.668855+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.669574+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315321 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.669998+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.965646744s of 10.002288818s, submitted: 29
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.670401+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.670901+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d592e8/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bcca, meta 0x3d44336), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.671068+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.671240+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316439 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.671410+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.671586+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592e6/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bfd7, meta 0x3d44029), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.672003+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.672374+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.672617+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315689 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.673061+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.673311+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.925606728s of 11.002860069s, submitted: 14
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.673642+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.673898+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.674361+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315545 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.674494+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.674731+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.675007+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2ca1a, meta 0x3d435e6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.675247+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.675449+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315929 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.675646+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2cce0, meta 0x3d43320), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.675846+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980464935s of 10.001944542s, submitted: 11
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.676036+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2807 syncs, 3.71 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3259 writes, 9856 keys, 3259 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s
                                           Interval WAL: 3259 writes, 1412 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.676161+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.676288+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.676446+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.676625+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.676828+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.676992+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 1794048 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.677167+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.677283+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.677417+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.627371788s of 10.002052307s, submitted: 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d59677/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d5c0, meta 0x3d42a40), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.677585+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.677747+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.677948+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315817 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.678123+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559005f52800 session 0x559004ecc000
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559007747800
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 1646592 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5590067fa000
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: get_auth_request con 0x5590067fbc00 auth_method 0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.678267+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 1662976 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559009534400 session 0x559007202a80
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5590091bdc00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559007746000 session 0x559008c45180
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008a1cc00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.678428+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x1d5980a/0x1e48000, compress 0x0/0x0/0x0, omap 0x2da77, meta 0x3d42589), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.678557+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.678705+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319383 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 1523712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.678827+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.678923+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x1d83259/0x1e72000, compress 0x0/0x0/0x0, omap 0x2dabe, meta 0x3d42542), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.679051+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.679222+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.993534088s of 11.781046867s, submitted: 23
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.679388+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330843 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1097728 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.679519+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 950272 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.679683+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 1441792 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.679847+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1dc6753/0x1eb5000, compress 0x0/0x0/0x0, omap 0x2dcaf, meta 0x3d42351), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.679973+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fac000/0x0/0x4ffc00000, data 0x1df0ce0/0x1ee0000, compress 0x0/0x0/0x0, omap 0x2ddcb, meta 0x3d42235), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.680185+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326003 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 1204224 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.680338+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102793216 unmapped: 1015808 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.680480+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.680604+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.680815+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 1703936 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.055461884s of 10.391435623s, submitted: 58
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.680981+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330561 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x1e57622/0x1f46000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.681830+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.682437+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559009277800
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.682852+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x1eab017/0x1f9b000, compress 0x0/0x0/0x0, omap 0x2e1f4, meta 0x3d41e0c), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 1359872 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.683022+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 950272 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 14
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.683386+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351197 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 794624 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x1f0c618/0x1ffd000, compress 0x0/0x0/0x0, omap 0x2e61d, meta 0x3d419e3), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.683926+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 1359872 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.684084+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.684582+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.684979+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 1163264 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.855733871s of 10.000440598s, submitted: 73
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.685434+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343261 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 3227648 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.685810+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 3014656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.686006+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.686422+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.686789+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.687017+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348665 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 2490368 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.687238+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 2482176 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.687414+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 3538944 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9daa000/0x0/0x4ffc00000, data 0x1ff2a65/0x20e2000, compress 0x0/0x0/0x0, omap 0x2ec7e, meta 0x3d41382), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.687576+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.687707+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.970839500s of 10.000647545s, submitted: 52
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.687897+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350107 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 3497984 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x1ff6b45/0x20e6000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.688136+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.688285+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.688617+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d7a000/0x0/0x4ffc00000, data 0x202215b/0x2112000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.688932+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105857024 unmapped: 3194880 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.689158+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361775 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 3162112 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d25000/0x0/0x4ffc00000, data 0x2075e62/0x2167000, compress 0x0/0x0/0x0, omap 0x2f060, meta 0x3d40fa0), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.689298+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 3121152 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.689477+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 2744320 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.689746+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.689984+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.027631760s of 10.000502586s, submitted: 46
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.690285+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361939 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3014656 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.690548+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x20cdd17/0x21be000, compress 0x0/0x0/0x0, omap 0x2f3fb, meta 0x3d40c05), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 2826240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.690738+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 2809856 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.690911+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 2433024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.691163+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x211dc20/0x220f000, compress 0x0/0x0/0x0, omap 0x2f824, meta 0x3d407dc), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.691320+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c6f000/0x0/0x4ffc00000, data 0x212a9c0/0x221c000, compress 0x0/0x0/0x0, omap 0x2f8f9, meta 0x3d40707), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369473 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.691495+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 3186688 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.691780+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 3112960 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.692001+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 1892352 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.692144+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c12000/0x0/0x4ffc00000, data 0x218ace6/0x227a000, compress 0x0/0x0/0x0, omap 0x2faa3, meta 0x3d4055d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 1449984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.208406448s of 10.001956940s, submitted: 71
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.692281+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376241 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 1368064 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.692418+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9bad000/0x0/0x4ffc00000, data 0x21ef2fe/0x22df000, compress 0x0/0x0/0x0, omap 0x2fe3e, meta 0x3d401c2), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 2351104 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.692607+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109215744 unmapped: 1933312 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.692825+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.692998+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.693165+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559008765000 session 0x559008126540
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x55900887c400
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379839 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109101056 unmapped: 2048000 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.693354+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 1859584 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.693499+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b3b000/0x0/0x4ffc00000, data 0x22618f6/0x2351000, compress 0x0/0x0/0x0, omap 0x30383, meta 0x3d3fc7d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110370816 unmapped: 778240 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.693624+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 753664 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.693763+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 557056 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b0f000/0x0/0x4ffc00000, data 0x228d478/0x237d000, compress 0x0/0x0/0x0, omap 0x3049f, meta 0x3d3fb61), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.538078308s of 10.000422478s, submitted: 61
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.693946+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384499 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 1605632 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x22ac7b5/0x239c000, compress 0x0/0x0/0x0, omap 0x306d7, meta 0x3d3f929), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.694135+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.694310+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 1556480 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.694664+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ad6000/0x0/0x4ffc00000, data 0x22c679d/0x23b6000, compress 0x0/0x0/0x0, omap 0x3083a, meta 0x3d3f7c6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 1515520 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.694825+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.695337+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386235 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110469120 unmapped: 2777088 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.695538+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9a75000/0x0/0x4ffc00000, data 0x2327bf3/0x2417000, compress 0x0/0x0/0x0, omap 0x30bd5, meta 0x3d3f42b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1638400 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.695733+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 1277952 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.695951+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.696180+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.300408363s of 10.016177177s, submitted: 165
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.696355+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393819 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 2449408 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.696501+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x31161, meta 0x3d3ee9f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.696718+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.696906+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x3127d, meta 0x3d3ed83), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111484928 unmapped: 2809856 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.697080+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 2646016 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.697338+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399779 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 1441792 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.697553+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.697762+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9952000/0x0/0x4ffc00000, data 0x244b70c/0x253a000, compress 0x0/0x0/0x0, omap 0x3191a, meta 0x3d3e6e6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31a36, meta 0x3d3e5ca), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.697999+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.698221+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112656384 unmapped: 2686976 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.698382+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177977562s of 10.428889275s, submitted: 88
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405161 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 2506752 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.698520+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x24c8603/0x25b7000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.698683+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 2220032 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.698925+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.699062+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 2449408 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.699237+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31e52, meta 0x3d3e1ae), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413183 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31fb5, meta 0x3d3e04b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 2277376 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.699387+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 2269184 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.699513+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 974848 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.699664+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 2269184 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.699814+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9849000/0x0/0x4ffc00000, data 0x2550ee0/0x2643000, compress 0x0/0x0/0x0, omap 0x3227b, meta 0x3d3dd85), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 2260992 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.699975+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413527 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277583122s of 10.388940811s, submitted: 61
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 3153920 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.700145+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97f7000/0x0/0x4ffc00000, data 0x25a30f1/0x2695000, compress 0x0/0x0/0x0, omap 0x32397, meta 0x3d3dc69), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.700302+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.700480+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.700696+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 3473408 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.700864+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97c7000/0x0/0x4ffc00000, data 0x25d36e4/0x26c5000, compress 0x0/0x0/0x0, omap 0x32732, meta 0x3d3d8ce), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424773 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3407872 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.701025+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3399680 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.701252+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 ms_handle_reset con 0x559009277800 session 0x559006a20e00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.701443+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.701622+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x26478b5/0x2739000, compress 0x0/0x0/0x0, omap 0x32d93, meta 0x3d3d26d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 1679360 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.701794+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426763 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.372922897s of 10.013011932s, submitted: 276
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f9745000/0x0/0x4ffc00000, data 0x26522a5/0x2745000, compress 0x0/0x0/0x0, omap 0x330dd, meta 0x3d3cf23), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 3465216 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.701941+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 3211264 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.702126+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 2023424 heap: 119537664 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.702324+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.702606+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.702814+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7384000/0x0/0x4ffc00000, data 0x26d1331/0x27c6000, compress 0x0/0x0/0x0, omap 0x336ef, meta 0x607c911), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434585 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 1662976 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.703007+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.703158+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.703434+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.703632+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7356000/0x0/0x4ffc00000, data 0x2701ed1/0x27f6000, compress 0x0/0x0/0x0, omap 0x3380b, meta 0x607c7f5), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118259712 unmapped: 2326528 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.703780+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1441097 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.071870804s of 10.087114334s, submitted: 71
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.703985+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.704129+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 2138112 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.704270+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118587392 unmapped: 1998848 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.704441+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f72db000/0x0/0x4ffc00000, data 0x277bc0b/0x2871000, compress 0x0/0x0/0x0, omap 0x34016, meta 0x607bfea), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118538240 unmapped: 3096576 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.704571+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443461 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.704728+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.704869+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.705040+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.705340+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72b5000/0x0/0x4ffc00000, data 0x279f910/0x2895000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.705515+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445049 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.705678+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.705796+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.705960+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.240313530s of 12.346166611s, submitted: 51
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 3768320 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.706085+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 3735552 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.706276+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x27cf715/0x28c6000, compress 0x0/0x0/0x0, omap 0x348ff, meta 0x607b701), peers [0,2] op hist [0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449311 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 2482176 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.706441+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7268000/0x0/0x4ffc00000, data 0x27ecf3f/0x28e4000, compress 0x0/0x0/0x0, omap 0x34a62, meta 0x607b59e), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 2564096 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.706610+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 2416640 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.706819+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f724e000/0x0/0x4ffc00000, data 0x2806ffd/0x28fe000, compress 0x0/0x0/0x0, omap 0x34c0c, meta 0x607b3f4), peers [0,2] op hist [0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 2285568 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.707033+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 2269184 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.707187+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452615 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 2121728 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.707305+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.707451+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.707638+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.959165096s of 10.444355011s, submitted: 96
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 2826240 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.707791+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f71d7000/0x0/0x4ffc00000, data 0x287d6fd/0x2975000, compress 0x0/0x0/0x0, omap 0x35035, meta 0x607afcb), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119889920 unmapped: 2793472 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.707982+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463501 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 2777088 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.708251+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.708397+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7188000/0x0/0x4ffc00000, data 0x28cc40f/0x29c4000, compress 0x0/0x0/0x0, omap 0x3568c, meta 0x607a974), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.708535+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 2555904 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.708659+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7183000/0x0/0x4ffc00000, data 0x28d1600/0x29c9000, compress 0x0/0x0/0x0, omap 0x3571a, meta 0x607a8e6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 2662400 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.708849+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464155 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.709768+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.710443+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.710712+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.711124+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447093964s of 11.187532425s, submitted: 79
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.711268+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464993 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.711443+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.711907+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.712361+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.712561+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.712857+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.713199+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.713491+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.713724+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.713968+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.714175+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.714344+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.714516+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.714659+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.714831+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.714979+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.715128+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.715276+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.715535+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.715764+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.715926+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.716262+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.716554+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.716783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.716995+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.717180+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.717362+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.717514+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.717635+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.717761+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.717897+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.718122+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.718233+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.718433+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.718568+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.718720+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.718885+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.719013+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.719175+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.244323730s of 39.156387329s, submitted: 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.719316+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.719467+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.719612+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466141 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.719775+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.719940+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.720237+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.720461+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x2968543/0x2a61000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.720593+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471213 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x29685de/0x2a62000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.720742+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.720920+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.721058+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.987424850s of 10.797435760s, submitted: 21
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.721181+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [160,161], i have 161, src has [1,161]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.721352+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472851 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 1384448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.721489+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36720, meta 0x60798e0), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.721623+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.721751+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.721951+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.722198+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471541 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36a2d, meta 0x60795d3), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.722336+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.722503+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.723170+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a341/0x2a64000, compress 0x0/0x0/0x0, omap 0x36d3a, meta 0x60792c6), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.723628+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 1359872 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.852377892s of 10.690566063s, submitted: 53
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.724071+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.724495+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.724809+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.725006+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.725139+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.725304+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.725585+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559008765000
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.725977+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 16
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 1261568 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5590091bb000
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e3000/0x0/0x4ffc00000, data 0x296bfd5/0x2a69000, compress 0x0/0x0/0x0, omap 0x37583, meta 0x6078a7d), peers [0,2] op hist [0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.726323+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.726647+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 17
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.184496880s of 10.089957237s, submitted: 11
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.726951+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482063 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e1000/0x0/0x4ffc00000, data 0x296c1d9/0x2a6b000, compress 0x0/0x0/0x0, omap 0x37802, meta 0x60787fe), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.727163+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.727503+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.727732+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.727912+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.728078+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.728340+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.728614+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.728790+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.728955+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.729250+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.729568+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.729773+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.729996+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.730165+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.730286+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.730488+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.730748+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.730898+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.731202+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.731363+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.731529+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.731652+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.731862+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.731991+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.732174+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.732286+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.732430+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.732614+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.732749+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.732852+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.732918+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.733054+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.733249+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.733449+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.733641+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.733817+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.734042+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.734181+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.734282+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.734414+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.734550+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.734717+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.734883+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.735010+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.735152+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.735527+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.735765+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.735891+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.736544+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.736807+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.736993+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.737174+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.737391+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.737549+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.737692+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.738785+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.612331390s of 57.233253479s, submitted: 8
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.740541+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.742006+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.742747+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.743079+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481585 data_alloc: 218103808 data_used: 7996
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 1048576 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.743976+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x559008765000 session 0x5590095501c0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.744290+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x5590091bb000 session 0x559009502700
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.744929+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 18
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.745211+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 1269760 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.745710+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481569 data_alloc: 218103808 data_used: 8151
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.746217+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.746612+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c2c1/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37cb9, meta 0x6078347), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.706017494s of 10.825467110s, submitted: 193
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.746907+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.747473+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.747599+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484091 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 1130496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.747828+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.747990+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.748185+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.748446+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.748677+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482925 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.748846+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.749055+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.749210+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024420738s of 11.033547401s, submitted: 49
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.749372+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.749578+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486275 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.749794+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296f908/0x2a6d000, compress 0x0/0x0/0x0, omap 0x386a3, meta 0x607795d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.749944+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.750151+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.750349+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.750470+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.750594+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.750741+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.750885+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.751010+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.133322716s of 11.148038864s, submitted: 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.751177+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38894, meta 0x607776c), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.751350+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.751496+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.751673+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.751827+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38acc, meta 0x6077534), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.752056+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488795 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.752268+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38ba1, meta 0x607745f), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.752568+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.752779+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.752971+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.753167+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488221 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.623150826s of 11.003303528s, submitted: 12
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.753302+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.753465+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.753617+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.753793+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.753945+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488397 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.754156+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.754343+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.754518+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dc000/0x0/0x4ffc00000, data 0x296fbd2/0x2a70000, compress 0x0/0x0/0x0, omap 0x39556, meta 0x6076aaa), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.754708+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.754882+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490089 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.575130463s of 10.002009392s, submitted: 8
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.755036+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.755210+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x399c6, meta 0x607663a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.755410+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.755620+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.755792+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489051 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.755949+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.756212+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39cd3, meta 0x607632d), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.756372+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.756533+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39f99, meta 0x6076067), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.756760+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489211 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819437981s of 10.002734184s, submitted: 15
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.756929+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a143, meta 0x6075ebd), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.757087+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.757259+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.757385+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.757497+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a4de, meta 0x6075b22), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489227 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.757606+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.757811+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.757972+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.758145+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.758358+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490887 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.297217369s of 10.052202225s, submitted: 12
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.758608+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a907, meta 0x60756f9), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.758768+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.758921+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3abcd, meta 0x6075433), peers [0,2] op hist [0,1])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.759130+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.759264+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 2179072 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492913 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.759429+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.759621+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f70d9000/0x0/0x4ffc00000, data 0x2971a93/0x2a71000, compress 0x0/0x0/0x0, omap 0x3afa6, meta 0x607505a), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.759755+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [165,166], i have 166, src has [1,166]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.759964+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.760162+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.760394+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.760584+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.760721+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.760918+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.761078+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _renew_subs
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.761246+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.761461+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.761626+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.761918+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.762471+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.762767+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.764918+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.765669+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.766308+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.767226+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.767875+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.768189+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.768523+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.769499+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.769783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.770204+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.770495+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.771220+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.772127+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.772597+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.773004+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.773214+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.773348+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.773859+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.774199+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.774418+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.774683+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.774851+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.775064+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.775267+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.775461+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.775650+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.775808+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.776008+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.776191+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.776324+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.776452+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.776624+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.776747+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.776917+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.777070+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.777312+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.777497+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.777680+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.777849+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.777995+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.778167+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.778299+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.778429+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.778582+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.778706+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.778892+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.779060+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.779215+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.779348+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.779502+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.779641+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.779772+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.779913+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.780053+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.780341+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.780611+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.780769+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.780895+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.781317+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:27.781512+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:28.781687+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 19
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:29.781873+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.034767151s of 82.260131836s, submitted: 50
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x559008cf0400 session 0x559008dbb880
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 3686400 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:30.782002+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 3604480 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:31.782130+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf dump' '{prefix=perf dump}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Got map version 20
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf schema' '{prefix=perf schema}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:32.782276+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:33.782436+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:34.782584+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:35.782716+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:36.782862+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:37.782995+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:38.783168+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:39.783283+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:40.783496+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:41.783659+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:42.783867+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:43.784075+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:44.784340+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:45.784530+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:46.784712+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:47.784917+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:48.785111+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:49.785251+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:50.785413+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:51.785578+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:52.785740+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:53.785911+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:54.786050+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:55.786235+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:56.786424+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:57.786560+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:58.786683+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:59.786791+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:00.786941+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:01.787178+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:02.787958+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:03.788166+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:04.788511+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:05.788639+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:06.788838+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:07.789330+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:08.789506+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:09.789703+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:10.790133+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:11.790265+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:12.790430+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:13.790618+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:14.790926+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:15.791229+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:16.791430+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:17.791668+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:18.791913+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:19.792053+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:20.792291+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:21.792459+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:22.792696+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:23.792933+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:24.793213+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:25.793486+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:26.793751+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:27.794037+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:28.794214+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:29.794514+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:30.794753+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:31.794888+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:32.795068+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:33.795304+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:34.795481+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:35.795680+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:36.795876+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:37.796068+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:38.796325+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:39.796460+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:40.796654+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:41.796911+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:42.797149+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:43.797364+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:44.797560+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:45.797898+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:46.798237+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:47.798478+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:48.798609+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:49.798753+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:50.798882+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:51.799007+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:52.799159+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:53.799322+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:54.799474+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:55.799639+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:56.799808+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:57.799964+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:58.800128+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:59.800278+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:00.800469+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:01.800608+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 14368768 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:02.800789+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:03.800998+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:04.801152+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:05.801464+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:06.801746+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:07.802019+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:08.802197+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:09.802749+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:10.803277+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:11.803482+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:12.803927+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:13.804244+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:14.804735+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:15.805067+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:16.805256+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:17.805820+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:18.806486+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:19.806680+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:20.806807+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:21.807060+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:22.807178+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122511360 unmapped: 14360576 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:23.807404+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:24.807528+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:25.807687+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:26.807962+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:27.808148+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:28.808370+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:29.808528+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:30.808700+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122519552 unmapped: 14352384 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:31.808895+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:32.809165+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:33.809492+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:34.809715+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:35.809870+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:36.810075+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:37.810221+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:38.810381+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122527744 unmapped: 14344192 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:39.810645+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:40.811162+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:41.811343+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:42.811565+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:43.811952+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:44.812178+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:45.812308+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:46.812512+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:47.812718+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:48.812965+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:49.813189+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:50.813397+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:51.813572+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:52.813736+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:53.813938+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:54.814206+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:55.814370+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:56.814534+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:57.814757+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:58.814928+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122535936 unmapped: 14336000 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4009 syncs, 3.38 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3149 writes, 9876 keys, 3149 commit groups, 1.0 writes per commit group, ingest: 14.97 MB, 0.02 MB/s
                                           Interval WAL: 3149 writes, 1202 syncs, 2.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:59.815190+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:00.815373+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:01.815616+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:02.815813+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:03.816052+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:04.816213+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:05.816379+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:06.816585+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:07.816789+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:08.816995+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:09.817181+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:10.817401+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:11.819327+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:12.819755+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:13.820786+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:14.821353+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:15.821558+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:16.821920+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:17.822935+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:18.823355+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 14327808 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:19.823560+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:20.823897+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:21.824661+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:22.824980+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:23.825256+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:24.825449+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:25.826163+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:26.826357+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:27.826519+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:28.826656+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:29.826925+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122552320 unmapped: 14319616 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:30.827072+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 14311424 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:31.827247+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 14311424 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:32.827432+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 14311424 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:33.827602+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 14311424 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:34.827793+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122560512 unmapped: 14311424 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:35.827995+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:36.828225+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:37.828347+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:38.828616+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:39.828840+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:40.828998+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:41.829333+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:42.835323+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:43.835504+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:44.835614+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:45.835787+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:46.835963+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:47.836129+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:48.836305+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:49.836485+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:50.836661+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 14303232 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:51.836862+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:52.837004+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:53.837175+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:54.837314+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:55.837496+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:56.837636+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:57.837833+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:58.837953+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:59.838088+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:00.838268+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:01.838397+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:02.838500+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:03.838690+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:04.838868+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:05.838973+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:06.839143+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:07.839320+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:08.839478+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:09.839639+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 14295040 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:10.839781+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:11.839932+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:12.840073+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:13.840266+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:14.841179+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:15.841415+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:16.841833+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:17.841971+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:18.842453+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:19.842936+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:20.843223+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:21.843570+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:22.843813+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 233.970153809s of 233.983871460s, submitted: 200
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:23.844305+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495054 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:24.844416+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 14286848 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:25.844768+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 14278656 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:26.844924+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 14278656 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:27.845172+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 14737408 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:28.845342+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 14704640 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:29.845531+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:30.845737+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:31.845950+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:32.846283+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:33.846518+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:34.846613+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:35.846848+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:36.847052+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:37.847312+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:38.847489+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:39.847660+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 14647296 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:40.847834+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:41.848076+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:42.848345+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:43.848541+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:44.848783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:45.849186+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:46.849407+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:47.849556+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:48.849730+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:49.849977+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:50.850145+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:51.850367+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:52.850645+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:53.850875+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:54.851062+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:55.851267+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:56.851417+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:57.851573+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:58.851712+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:59.851851+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:00.851972+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:01.852141+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:02.852271+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:03.852445+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:04.852561+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:05.852732+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:06.852869+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:07.853016+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:08.853162+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:09.853259+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:10.853496+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:11.853660+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:12.853867+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:13.854028+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:14.854201+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:15.854332+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:16.854485+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:17.854629+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 14630912 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:18.854766+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:19.855154+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:20.855539+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:21.855806+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:22.856330+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:23.856574+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:24.857345+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:25.857530+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:26.857614+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:27.857960+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:28.858139+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:29.858506+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:30.858607+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:31.858803+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:32.859032+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:33.859250+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:34.859388+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:35.859618+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:36.859831+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 14622720 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:37.860042+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:38.860230+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:39.860365+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:40.860452+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:41.860599+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:42.860734+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:43.860915+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:44.861051+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 14614528 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:45.861187+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 14606336 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:46.861311+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 14606336 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:47.861427+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:48.861603+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 14606336 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:49.861766+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 14606336 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:50.861939+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 14606336 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:51.862234+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:52.862383+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:53.862540+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:54.862664+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:55.862793+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:56.862923+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:57.863045+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:58.863183+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:59.863314+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:00.863395+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:01.863516+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:02.863640+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:03.863795+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:04.863969+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:05.864135+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:06.864271+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 14598144 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:07.864394+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:08.864562+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:09.864681+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:10.864807+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:11.864960+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:12.865144+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:13.865470+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:14.865606+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:15.865738+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:16.865867+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:17.865996+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:18.866158+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:19.866288+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:20.866404+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:21.866553+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:22.866692+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:23.867074+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:24.867459+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:25.867748+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 14589952 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:26.868079+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:27.868244+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:28.868381+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:29.868573+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:30.868730+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:31.868932+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:32.869084+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:33.869334+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:34.869502+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 14581760 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:35.869955+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:36.870277+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:37.870411+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:38.870524+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:39.870676+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:40.870843+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:41.870993+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:42.871142+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:43.871348+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:44.871484+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:45.871610+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:46.871743+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:47.871864+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:48.871993+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:49.872139+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:50.872275+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:51.872434+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:52.872593+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:53.872750+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:54.872875+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 14573568 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:55.873008+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:56.873136+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:57.873284+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:58.873397+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:59.873570+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:00.873654+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:01.873766+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:02.873865+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:03.874005+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:04.874168+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:05.874312+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:06.874425+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:07.874559+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:08.874677+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:09.874821+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:10.874943+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:11.875046+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:12.875249+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:13.875423+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:14.875572+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:15.875675+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:16.875919+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:17.876265+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:18.876463+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:19.876583+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:20.876714+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 14565376 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:21.876877+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:22.877045+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:23.877262+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:24.877393+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:25.877581+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:26.877718+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:27.877937+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:28.878183+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:29.878399+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:30.878564+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:31.878791+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:32.879209+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:33.879646+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:34.879889+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:35.880066+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:36.880223+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:37.880401+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:38.880566+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:39.880700+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:40.881004+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:41.881241+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:42.881386+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:43.881628+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:44.881812+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:45.881952+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 14557184 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:46.882181+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:47.882333+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:48.882556+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:49.882743+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:50.882910+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:51.883145+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:52.883310+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:53.883481+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:54.883614+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:55.883770+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:56.883963+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:57.884154+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:58.884317+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets getting new tickets!
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:59.884740+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _finish_auth 0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:59.885634+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:00.884946+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 14548992 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:01.885084+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 14540800 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:02.885241+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 14540800 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:03.885404+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 14540800 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:04.885582+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 14540800 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:05.885783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:06.885973+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:07.886147+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:08.886280+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:09.886434+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:10.886585+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 14532608 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:11.886760+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x559007747800 session 0x5590095516c0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x55900887dc00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5590067fbc00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: get_auth_request con 0x55900887e800 auth_method 0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122658816 unmapped: 14213120 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:12.886936+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x5590091bdc00 session 0x559007203500
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x55900a375800
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122658816 unmapped: 14213120 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x559008a1cc00 session 0x559007664a80
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x559006df2400
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:13.887136+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:14.887269+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:15.887410+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:16.887547+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:17.887675+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:18.887823+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:19.887950+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-mon[75358]: from='client.14922 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:20.888144+0000)
Dec 04 11:02:05 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2293761273' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-mon[75358]: from='client.14926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:21.888309+0000)
Dec 04 11:02:05 compute-0 ceph-mon[75358]: pgmap v1585: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-mon[75358]: from='client.14930 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:22.888474+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:23.888663+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:24.888794+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:25.888945+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:26.889181+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:27.889333+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:28.889470+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:29.889605+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:30.889787+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:31.889941+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:32.890072+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:33.890255+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:34.890428+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:35.890591+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:36.890743+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:37.890912+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:38.891048+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:39.891171+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:40.891304+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:41.891437+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:42.891549+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:43.891748+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:44.891953+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:45.892105+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:46.892247+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:47.892434+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:48.892583+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:49.892716+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:50.892857+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:51.892991+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:52.893178+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:53.893403+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 14204928 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:54.893568+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:55.893700+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:56.893895+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:57.894057+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:58.894487+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:59.894711+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:00.894925+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:01.895071+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:02.895151+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:03.895325+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:04.895487+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:05.895616+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:06.895766+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:07.895893+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:08.896154+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:09.896316+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122675200 unmapped: 14196736 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:10.896459+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:11.896584+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:12.896783+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:13.897019+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:14.897139+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x55900887c400 session 0x559009083a40
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x55900a378c00
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:15.897279+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:16.897425+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:17.897598+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 14188544 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:18.897793+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:19.898194+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:20.898323+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:21.898472+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:22.898591+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:23.898731+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 298.836151123s of 300.203460693s, submitted: 90
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 14147584 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:24.898854+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 14147584 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:25.898975+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 14147584 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:26.899132+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 14139392 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:27.899248+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 14139392 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:28.899926+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 14139392 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:29.900046+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:05 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:05 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 14139392 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:30.900160+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122748928 unmapped: 14123008 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:31.900295+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 14180352 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:32.900446+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 14409728 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:33.900608+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 14270464 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: tick
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Dec 04 11:02:05 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:34.900730+0000)
Dec 04 11:02:05 compute-0 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:05 compute-0 nova_compute[244644]: 2025-12-04 11:02:05.174 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:05 compute-0 nova_compute[244644]: 2025-12-04 11:02:05.198 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 04 11:02:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697737041' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 04 11:02:05 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14934 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:05 compute-0 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 11:02:05 compute-0 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T11:02:05.326+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 04 11:02:05 compute-0 nova_compute[244644]: 2025-12-04 11:02:05.357 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:05 compute-0 rsyslogd[1007]: imjournal from <np0005545273:ceph-osd>: begin to drop messages due to rate-limiting
Dec 04 11:02:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 04 11:02:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967597012' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 04 11:02:05 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 04 11:02:05 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510338073' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2697737041' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: from='client.14934 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1967597012' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/510338073' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 04 11:02:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675514162' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 04 11:02:06 compute-0 nova_compute[244644]: 2025-12-04 11:02:06.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:06 compute-0 nova_compute[244644]: 2025-12-04 11:02:06.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 04 11:02:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011056488' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 04 11:02:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911783493' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 04 11:02:06 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 04 11:02:06 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681961303' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3675514162' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3011056488' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: pgmap v1586: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 04 11:02:07 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2911783493' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2681961303' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 04 11:02:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313349740' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 04 11:02:07 compute-0 nova_compute[244644]: 2025-12-04 11:02:07.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 04 11:02:07 compute-0 nova_compute[244644]: 2025-12-04 11:02:07.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 04 11:02:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 04 11:02:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158855771' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 04 11:02:07 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 04 11:02:07 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637628572' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 04 11:02:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462030105' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 04 11:02:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746933685' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3313349740' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/158855771' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3637628572' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1462030105' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1746933685' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 04 11:02:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 04 11:02:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248520119' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 11:02:08 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 04 11:02:08 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684270108' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 04 11:02:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 04 11:02:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387657565' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 04 11:02:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 04 11:02:09 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4188323657' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 04 11:02:09 compute-0 ceph-mon[75358]: pgmap v1587: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 04 11:02:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1248520119' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 04 11:02:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/684270108' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 04 11:02:09 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2387657565' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:36.038320+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:37.038484+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:38.038637+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:39.038765+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:40.038943+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:41.039089+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:42.039258+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:43.039397+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:44.039543+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:45.039723+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:46.039902+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:47.040108+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:48.040337+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:49.040488+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:50.040620+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:51.040782+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:52.040972+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:53.041145+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:54.041299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:55.041539+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:56.041827+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:57.042026+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:58.042183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:28:59.042299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:00.042431+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:01.042758+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:02.042998+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:03.043142+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:04.043282+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:05.043388+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:06.043628+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:07.043882+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:08.044010+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:09.044177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:10.044307+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:11.044485+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:12.044635+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:13.044816+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:14.044993+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:15.045167+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:16.045291+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:17.045454+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:18.045595+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:19.045741+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:20.045882+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:21.046044+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:22.046171+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:23.046319+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:24.046443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:25.046562+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:26.046682+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:27.047155+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:28.047273+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:29.047415+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:30.047550+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:31.047700+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:32.047882+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:33.048041+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:34.048171+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:35.048337+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:36.048480+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:37.049061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:38.049240+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:39.049368+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:40.049538+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:41.049812+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:42.049921+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:43.050146+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:44.050285+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:45.050389+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:46.050500+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:47.050670+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:48.050809+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:49.050967+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:50.051153+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:51.051359+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:52.051518+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:53.051647+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:54.051794+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:55.051937+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:56.052061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1409024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:57.052310+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1409024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:58.052426+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:29:59.052553+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:00.052816+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:01.052997+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:02.053241+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:03.053445+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:04.053590+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:05.053721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:06.053872+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:07.054038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc ms_handle_reset ms_handle_reset con 0x561162cce000
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: get_auth_request con 0x561165051800 auth_method 0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:08.054203+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:09.054413+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:10.054567+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:11.054690+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:12.054821+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:13.054972+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 ms_handle_reset con 0x561162552400 session 0x5611613c9340
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165051c00
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:14.055120+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:15.055303+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:16.055455+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:17.055651+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:18.055756+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:19.055875+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:20.056056+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:21.056202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:22.056363+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:23.056503+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:24.056646+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:25.056784+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:26.056926+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:27.057156+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:28.057311+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:29.057470+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:30.057595+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:31.057757+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:32.057984+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:33.058186+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:34.058355+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:35.058538+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:36.058723+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:37.058966+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:38.059188+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:39.059366+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:40.059499+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:41.059639+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:42.059781+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:43.060252+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:44.060405+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:45.060533+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:46.060644+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:47.060799+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:48.060922+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:49.061094+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:50.061335+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:51.061483+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:52.061608+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:53.061754+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:54.061865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:55.061995+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:56.062139+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:57.062292+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:58.062435+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:30:59.062614+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:00.062760+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:01.062880+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:02.063051+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:03.063183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:04.063317+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:05.063447+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:06.063607+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:07.063873+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:08.064028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:09.064182+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:10.064324+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:11.064452+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:12.064572+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:13.064718+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:14.064872+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:15.065065+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:16.065231+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:17.065445+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:18.065634+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:19.065816+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:20.065975+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:21.066136+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994621 data_alloc: 218103808 data_used: 2977
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:22.066270+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:23.066573+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.918304443s of 300.103698730s, submitted: 106
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561163aeb000
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:24.066746+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:25.066917+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:26.067177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:27.067396+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:28.067545+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:29.067672+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:30.067843+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:31.067972+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:32.068185+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:33.068347+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:34.068496+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:35.068640+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:36.068926+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:37.069163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:38.069318+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:39.069496+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:40.069686+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:41.069817+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:42.069971+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:43.070159+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:44.070367+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:45.070592+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:46.070789+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:47.070964+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:48.071111+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:49.071249+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:50.071396+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:51.071537+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:52.071728+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:53.071892+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:54.072243+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:55.072457+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:56.072650+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:57.072868+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:58.073037+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:31:59.073147+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:00.073275+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:01.073402+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:02.073535+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:03.073731+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:04.073873+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:05.074071+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:06.074248+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:07.074487+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:08.074655+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:09.074783+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:10.074932+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:11.075081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:12.075308+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:13.075547+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:14.075733+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:15.075890+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:16.076033+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:17.076220+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:18.076395+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:19.076542+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:20.076728+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:21.076959+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:22.077153+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:23.077455+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:24.077660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:25.077803+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:26.077961+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:27.078160+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:28.078316+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:29.078446+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:30.078650+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:31.078824+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:32.078985+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:33.079192+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:34.079442+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:35.079582+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:36.079720+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:37.080053+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:38.080140+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:39.080263+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:40.080426+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:41.080618+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:42.080782+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:43.081068+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:44.081335+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:45.081517+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:46.081694+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:47.081932+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:48.082105+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:49.082327+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:50.082504+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:51.082735+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:52.082915+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:53.083077+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:54.083289+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:55.083498+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:56.083697+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:57.084018+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:58.084250+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:32:59.084441+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:00.084664+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:01.084929+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:02.085173+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:03.085362+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:04.085614+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:05.085904+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:06.086175+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:07.086411+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:08.086650+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:09.086876+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:10.087188+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:11.087499+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:12.087847+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:13.088126+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:14.088520+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:15.088726+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:16.088938+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:17.089228+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:18.089527+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:19.089767+0000)
Dec 04 11:02:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14966 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:20.089951+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:21.090231+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:22.090451+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:23.090660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:24.090854+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:25.091030+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:26.091258+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:27.091483+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:28.091660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:29.091877+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:30.092164+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:31.092371+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:32.092538+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:33.092710+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:34.092931+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:35.093184+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:36.093368+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:37.093658+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:38.093912+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:39.094148+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:40.094370+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:41.094564+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:42.094744+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:43.094896+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:44.095120+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:45.095317+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:46.095526+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:47.095712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:48.095889+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:49.096049+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:50.096268+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:51.096481+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:52.096673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:53.096992+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:54.097267+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:55.097512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:56.097740+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:57.097955+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:58.098302+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:33:59.098553+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:00.098916+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:01.099237+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:02.099440+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:03.099629+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:04.099859+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:05.100061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:06.100380+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:07.100623+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:08.100843+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:09.101078+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:10.101299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:11.101484+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:12.101726+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:13.102182+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:14.102765+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:15.102966+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:16.103219+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:17.103552+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:18.103723+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:19.103900+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:20.104071+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000017s
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:21.104304+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:22.104485+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:23.104663+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:24.104857+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:25.105041+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:26.105214+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:27.105540+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:28.105728+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:29.105928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:30.106136+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:31.106319+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:32.106514+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:33.106688+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:34.107038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:35.107268+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:36.107452+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:37.107755+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:38.107940+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:39.108216+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:40.108580+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:41.108756+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:42.108936+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:43.109128+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:44.109351+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:45.109585+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:46.109820+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:47.110252+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:48.110818+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:49.111309+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:50.111808+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:51.112012+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:52.112346+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5692 writes, 24K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5692 writes, 915 syncs, 6.22 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:53.112792+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:54.112973+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:55.113152+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:56.113400+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:57.113558+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:58.113708+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:34:59.113855+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:00.113988+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:01.114247+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:02.114420+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:03.114639+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:04.114851+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:05.115062+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:06.115271+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:07.115502+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:08.115718+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:09.115975+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:10.116174+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:11.423808+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:12.423985+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:13.424174+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:14.424366+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:15.424522+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:16.424663+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:17.424874+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:18.425019+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:19.425138+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:20.425276+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:21.425439+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:22.425582+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:23.425711+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:24.425841+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:25.425983+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:26.426136+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:27.426336+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:28.426510+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:29.426703+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:30.426866+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:31.427035+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:32.427165+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:33.427274+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:34.427401+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:35.427523+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:36.427662+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:37.427811+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:38.427986+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:39.428302+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:40.428495+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:41.428663+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:42.428867+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:43.429009+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:44.429171+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:45.429296+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:46.429791+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:47.429928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:48.430366+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:49.430529+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:50.430678+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:51.430823+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:52.431054+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:53.431248+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:54.431669+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:55.431818+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:56.432052+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:57.432348+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:58.432494+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:35:59.432607+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:00.432766+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:01.432950+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:02.433210+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:03.433397+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:04.433551+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:05.433758+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:06.434005+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:07.434430+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:08.434601+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:09.434785+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:10.435020+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:11.435290+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:12.435508+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:13.435661+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:14.435819+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:15.436063+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:16.436250+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:17.436424+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:18.436588+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:19.436782+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:20.436909+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:21.437018+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:22.437135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.937805176s of 299.970245361s, submitted: 18
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:23.437243+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:24.437370+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:25.437495+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:26.437619+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:27.437779+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:28.437911+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:29.437975+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:30.438138+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:31.438321+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:32.438558+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:33.438680+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:34.438844+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:35.438981+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:36.439119+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:37.439262+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:38.439395+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:39.439560+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:40.439717+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:41.439939+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:42.440094+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:43.440296+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:44.440425+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:45.440578+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:46.440714+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:47.440881+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:48.440996+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:49.441155+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:50.441321+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:51.441701+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:52.441854+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:53.442018+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:54.442195+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:55.442379+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:56.442603+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:57.442827+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:58.443028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:36:59.443172+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:00.443337+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:01.443494+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:02.443757+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:03.443953+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:04.444162+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:05.444524+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:06.444721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:07.444910+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:08.445061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:09.445222+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:10.445349+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:11.445497+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:12.445685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:13.445873+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:14.446019+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:15.446180+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:16.446309+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:17.446487+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:18.446698+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:19.446850+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:20.447065+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:21.447266+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:22.447444+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:23.447616+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:24.478660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:25.478842+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:26.479030+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:27.479253+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:28.479379+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:29.479502+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:30.479649+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:31.479802+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:32.479922+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:33.480069+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:34.480222+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:35.480329+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:36.480468+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:37.480685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:38.480790+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:39.480931+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:40.481023+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:41.481186+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:42.481334+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:43.481489+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:44.481646+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:45.481794+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:46.481941+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:47.482202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:48.482348+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:49.482492+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:50.482659+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:51.482826+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:52.482965+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:53.483238+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:54.483377+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:55.483492+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:56.483660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:57.483829+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:58.483997+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:37:59.484178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:00.484312+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:01.484591+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:02.484755+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:03.485006+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:04.486062+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:05.486623+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:06.487492+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:07.488016+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:08.488267+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:09.488394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:10.489009+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:11.489283+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:12.489598+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:13.489814+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:14.490205+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:15.490617+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:16.490990+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:17.491345+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995773 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:18.491647+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:19.491933+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:20.492254+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xa9091/0x167000, compress 0x0/0x0/0x0, omap 0xda3f, meta 0x2bc25c1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 117.103752136s of 117.262046814s, submitted: 106
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780400
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 98304 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:21.492583+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 120 ms_handle_reset con 0x561165780400 session 0x5611655cd880
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:22.492821+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 16867328 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048276 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780c00
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:23.492962+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 16711680 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 121 ms_handle_reset con 0x561165780c00 session 0x5611653556c0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:24.493146+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:25.493419+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc244000/0x0/0x4ffc00000, data 0xd1ffb0/0xde4000, compress 0x0/0x0/0x0, omap 0x11804, meta 0x2bbe7fc), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:26.493661+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:27.493920+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076723 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:28.494123+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:29.494341+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 16670720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:30.494483+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:31.494697+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:32.494912+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:33.495163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:34.495315+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:35.495456+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:36.495595+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:37.495835+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:38.496018+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:39.496177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:40.496370+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:41.496607+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:42.496812+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:43.497089+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 16654336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 10
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:44.497344+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:45.497545+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:46.497743+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:47.498369+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:48.498565+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:49.498773+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:50.499075+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 16826368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:51.499308+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:52.499510+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc243000/0x0/0x4ffc00000, data 0xd21a2f/0xde7000, compress 0x0/0x0/0x0, omap 0x11b07, meta 0x2bbe4f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078745 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:53.499739+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 16818176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 11
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:54.499953+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:55.500151+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.303222656s of 35.438919067s, submitted: 47
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:56.500428+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:57.500732+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081519 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:58.500907+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:38:59.501048+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:00.501246+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:01.501391+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:02.501678+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081519 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:03.501935+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc240000/0x0/0x4ffc00000, data 0xd23634/0xdea000, compress 0x0/0x0/0x0, omap 0x11dc1, meta 0x2bbe23f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:04.502163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:05.502377+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:06.502576+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:07.502775+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:08.513285+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:09.513512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:10.513710+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:11.513846+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:12.514066+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:13.514253+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:14.514442+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:15.514671+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:16.514818+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:17.515054+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084293 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:18.515238+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:19.515429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:20.515555+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:21.515698+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165781000
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.509790421s of 25.738904953s, submitted: 31
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23d000/0x0/0x4ffc00000, data 0xd250b3/0xded000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:22.515835+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085985 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:23.515971+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23c000/0x0/0x4ffc00000, data 0xd2514e/0xdee000, compress 0x0/0x0/0x0, omap 0x12054, meta 0x2bbdfac), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:24.516181+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:25.516335+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:26.516453+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:27.516628+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 16621568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd26d53/0xdf1000, compress 0x0/0x0/0x0, omap 0x12311, meta 0x2bbdcef), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088615 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:28.516876+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:29.517261+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd26d53/0xdf1000, compress 0x0/0x0/0x0, omap 0x12311, meta 0x2bbdcef), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:30.517436+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:31.517708+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:32.517942+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088615 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:33.518131+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16769024 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.492445946s of 12.550980568s, submitted: 38
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:34.518331+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:35.518473+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:36.518617+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:37.518850+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:38.519176+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:39.519394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:40.519657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:41.519791+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:42.520192+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:43.520372+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:44.520899+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:45.521061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:46.521202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:47.521411+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091389 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:48.521541+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:49.521774+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:50.521998+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:51.522178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 16760832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.262409210s of 18.273027420s, submitted: 11
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:52.522306+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 16728064 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092361 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 12
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:53.522481+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd2886d/0xdf5000, compress 0x0/0x0/0x0, omap 0x125d5, meta 0x2bbda2b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd287d2/0xdf4000, compress 0x0/0x0/0x0, omap 0x12836, meta 0x2bbd7ca), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:54.522704+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:55.523032+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:56.523181+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:57.523383+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 15622144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095743 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc233000/0x0/0x4ffc00000, data 0xd2a3d7/0xdf7000, compress 0x0/0x0/0x0, omap 0x12af6, meta 0x2bbd50a), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:58.523564+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:39:59.523758+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc233000/0x0/0x4ffc00000, data 0xd2a50d/0xdf9000, compress 0x0/0x0/0x0, omap 0x12af6, meta 0x2bbd50a), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:00.523925+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 15613952 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc22e000/0x0/0x4ffc00000, data 0xd2c152/0xdfc000, compress 0x0/0x0/0x0, omap 0x12db8, meta 0x2bbd248), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:01.524064+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:02.524304+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 14499840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:03.524476+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104979 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc229000/0x0/0x4ffc00000, data 0xd2dda7/0xdff000, compress 0x0/0x0/0x0, omap 0x13240, meta 0x2bbcdc0), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.146455765s of 11.366581917s, submitted: 72
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 14491648 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:04.524683+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd2dd0c/0xdfe000, compress 0x0/0x0/0x0, omap 0x13240, meta 0x2bbcdc0), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 13410304 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:05.524835+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 13344768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 130 handle_osd_map epochs [131,132], i have 130, src has [1,132]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:06.525018+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 13246464 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:07.525224+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:08.525677+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117249 data_alloc: 218103808 data_used: 4361
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc220000/0x0/0x4ffc00000, data 0xd34ce6/0xe0a000, compress 0x0/0x0/0x0, omap 0x13a95, meta 0x2bbc56b), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:09.525881+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 133 handle_osd_map epochs [134,135], i have 133, src has [1,135]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:10.526038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd38436/0xe10000, compress 0x0/0x0/0x0, omap 0x13d70, meta 0x2bbc290), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:11.526183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:12.526423+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 13238272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:13.526620+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122177 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc21c000/0x0/0x4ffc00000, data 0xd38300/0xe0e000, compress 0x0/0x0/0x0, omap 0x13d70, meta 0x2bbc290), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.265845299s of 10.467185020s, submitted: 143
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:14.526840+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:15.527186+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:16.527428+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:17.527657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:18.527846+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124519 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:19.528051+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:20.528243+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:21.528443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:22.528617+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:23.528776+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124519 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 13205504 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.179412842s of 10.185050011s, submitted: 10
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:24.528901+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39dff/0xe11000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:25.529033+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:26.529229+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:27.529461+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:28.529835+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:29.530038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:30.530202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:31.530408+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:32.530564+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:33.530745+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:34.530928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:35.531051+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:36.531205+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:37.531375+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:38.531518+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126211 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:39.531713+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:40.531864+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:41.532019+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:42.532226+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.345500946s of 18.347776413s, submitted: 1
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc218000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:43.532384+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127183 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:44.532510+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 13164544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:45.532645+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39f35/0xe13000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:46.532775+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:47.532925+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:48.533057+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127039 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:49.533213+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:50.533363+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:51.533522+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39f35/0xe13000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:52.533685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:53.533850+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127039 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034024239s of 11.041786194s, submitted: 3
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:54.534006+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:55.534164+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:56.534361+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 13197312 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:57.534538+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 12779520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:58.534686+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125347 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 12779520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 13
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:40:59.534876+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:00.534992+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:01.535168+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:02.535319+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 12722176 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:03.535465+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125347 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc21a000/0x0/0x4ffc00000, data 0xd39e9a/0xe12000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.244282722s of 10.261468887s, submitted: 135
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 12713984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:04.535643+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 12713984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:05.535786+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 12451840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:06.535927+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 12451840 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:07.536149+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc20a000/0x0/0x4ffc00000, data 0xd48ce3/0xe22000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 12443648 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:08.536299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131365 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 12001280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:09.536527+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 11919360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:10.536673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 10600448 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:11.536879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 10264576 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:12.537032+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1d2000/0x0/0x4ffc00000, data 0xd7ff44/0xe5a000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 10264576 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:13.537220+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137373 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 10149888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:14.537373+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 10149888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:15.537517+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.977731705s of 11.395256042s, submitted: 22
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:16.537660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:17.537826+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 10108928 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:18.537976+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135945 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1cf000/0x0/0x4ffc00000, data 0xd82fff/0xe5d000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 10305536 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:19.538172+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 10305536 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:20.538364+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 10256384 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:21.538523+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 9994240 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:22.538715+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc19b000/0x0/0x4ffc00000, data 0xdb8c69/0xe91000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 9805824 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:23.538879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137531 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 9609216 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdd5d75/0xeae000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:24.539041+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 9609216 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:25.539211+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.382287979s of 10.000560760s, submitted: 29
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 9969664 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:26.539357+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc15b000/0x0/0x4ffc00000, data 0xdf8a8e/0xed1000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x2bbbd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 8568832 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:27.539508+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 7479296 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:28.539664+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140175 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 7315456 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:29.539830+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xe1eb45/0xef7000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87056384 unmapped: 6266880 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:30.539981+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf7c000/0x0/0x4ffc00000, data 0xe3717a/0xf10000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 5857280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:31.540164+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87678976 unmapped: 5644288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:32.540330+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87400448 unmapped: 5922816 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:33.540465+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140755 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87523328 unmapped: 5799936 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:34.540594+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf55000/0x0/0x4ffc00000, data 0xe5e360/0xf37000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:35.540762+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.452430725s of 10.001968384s, submitted: 39
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf3b000/0x0/0x4ffc00000, data 0xe78260/0xf51000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:36.540884+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 5865472 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:37.541147+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 5382144 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:38.541340+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144413 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 heartbeat osd_stat(store_statfs(0x4faf1e000/0x0/0x4ffc00000, data 0xe96c80/0xf6e000, compress 0x0/0x0/0x0, omap 0x142d7, meta 0x3d5bd29), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 5324800 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:39.541512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 5316608 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:40.541765+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:41.541913+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:42.542074+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 5562368 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:43.542240+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149975 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 5390336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:44.542419+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 5390336 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:45.542604+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xec381d/0xf9c000, compress 0x0/0x0/0x0, omap 0x1451f, meta 0x3d5bae1), peers [1,2] op hist [0,0,0,0,0,0,0,3])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.731677055s of 10.078829765s, submitted: 42
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 5701632 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:46.542778+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faee2000/0x0/0x4ffc00000, data 0xecdfcc/0xfa8000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:47.543040+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:48.543258+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151029 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 4562944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:49.543418+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faed5000/0x0/0x4ffc00000, data 0xedc6c9/0xfb7000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 4497408 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:50.543577+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:51.543752+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faeba000/0x0/0x4ffc00000, data 0xef76f2/0xfd2000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:52.543899+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 4268032 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:53.544052+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153741 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 4268032 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:54.544190+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 4628480 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:55.544300+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:56.544446+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:57.544642+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae92000/0x0/0x4ffc00000, data 0xf1f503/0xffa000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 4603904 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566334724s of 12.781497002s, submitted: 36
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:58.544781+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158537 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 4333568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae67000/0x0/0x4ffc00000, data 0xf4a751/0x1025000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:41:59.544907+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88989696 unmapped: 4333568 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:00.545042+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 4325376 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:01.545210+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae5d000/0x0/0x4ffc00000, data 0xf545d3/0x102f000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:02.545354+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:03.545563+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156873 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 3923968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:04.545730+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:05.546015+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae32000/0x0/0x4ffc00000, data 0xf7d583/0x1059000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:06.546174+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 4382720 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:07.546342+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 89038848 unmapped: 4284416 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:08.546495+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161725 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.631993294s of 10.691933632s, submitted: 33
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90374144 unmapped: 2949120 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:09.546721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fae02000/0x0/0x4ffc00000, data 0xfaede7/0x108a000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 2924544 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:10.546903+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 2621440 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:11.547043+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 2621440 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:12.547209+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90710016 unmapped: 2613248 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:13.547389+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163151 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90849280 unmapped: 2473984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:14.547536+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90849280 unmapped: 2473984 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:15.547657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fadd5000/0x0/0x4ffc00000, data 0xfdd2ba/0x10b7000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90341376 unmapped: 2981888 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:16.547865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:17.548086+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:18.548307+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167259 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 2891776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:19.548453+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.624699593s of 10.735450745s, submitted: 38
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fadbe000/0x0/0x4ffc00000, data 0xff222b/0x10cd000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90628096 unmapped: 2695168 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:20.548631+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90718208 unmapped: 2605056 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:21.548799+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90718208 unmapped: 2605056 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:22.548972+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:23.549178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168531 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:24.549340+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fad6c000/0x0/0x4ffc00000, data 0x10463f2/0x1120000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 2703360 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:25.549532+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 2498560 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:26.549724+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 2482176 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:27.549901+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 2482176 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:28.550038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169187 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fad25000/0x0/0x4ffc00000, data 0x108cd39/0x1167000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2187264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:29.550175+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.091275215s of 10.204633713s, submitted: 47
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2187264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:30.550318+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92356608 unmapped: 2015232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:31.550494+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:32.550656+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:33.550795+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174183 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4face3000/0x0/0x4ffc00000, data 0x10cd692/0x11a9000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91873280 unmapped: 2498560 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:34.550933+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4facc6000/0x0/0x4ffc00000, data 0x10ea658/0x11c6000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 2449408 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:35.551057+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4facbb000/0x0/0x4ffc00000, data 0x10f617e/0x11d1000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:36.551202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92151808 unmapped: 2220032 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:37.551468+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 92151808 unmapped: 2220032 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:38.551647+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181625 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:39.551773+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:40.551914+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 991232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.397225380s of 10.507322311s, submitted: 62
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:41.552064+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93577216 unmapped: 794624 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac75000/0x0/0x4ffc00000, data 0x113c22f/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:42.552237+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 778240 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:43.552355+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 778240 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180405 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:44.552512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:45.552688+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:46.552880+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:47.553043+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:48.553200+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179687 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:49.553356+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:50.553505+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:51.553673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:52.553844+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:53.554017+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 770048 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179687 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d483/0x1217000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:54.554135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:55.554262+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.933678627s of 15.948211670s, submitted: 9
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:56.554394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:57.554555+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fac74000/0x0/0x4ffc00000, data 0x113d51e/0x1218000, compress 0x0/0x0/0x0, omap 0x14863, meta 0x3d5b79d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:58.554700+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178837 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:42:59.554833+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:00.554979+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 761856 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:01.555089+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:02.555271+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:03.555408+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fac70000/0x0/0x4ffc00000, data 0x113f088/0x121a000, compress 0x0/0x0/0x0, omap 0x14aac, meta 0x3d5b554), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181357 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:04.555591+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:05.555709+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:06.555879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.532999039s of 10.572518349s, submitted: 22
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:07.556036+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:08.556183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180637 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fac72000/0x0/0x4ffc00000, data 0x113f088/0x121a000, compress 0x0/0x0/0x0, omap 0x14aac, meta 0x3d5b554), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:09.556375+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:10.556559+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:11.556733+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:12.556886+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:13.557019+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac69000/0x0/0x4ffc00000, data 0x1140cc8/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189079 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac69000/0x0/0x4ffc00000, data 0x1140cc8/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:14.557346+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:15.557512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:16.557648+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140cf6/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:17.557814+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:18.557926+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189173 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:19.558041+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 753664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.848041534s of 12.879414558s, submitted: 22
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:20.558174+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 745472 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6c000/0x0/0x4ffc00000, data 0x1140cf4/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:21.558365+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:22.558805+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:23.558965+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190131 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140ca2/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:24.559210+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:25.559599+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93634560 unmapped: 737280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:26.559727+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:27.559939+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:28.560212+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93642752 unmapped: 729088 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189173 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:29.560386+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140ca2/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:30.560664+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:31.560874+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140c5d/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:32.561194+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 720896 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.930911064s of 12.951424599s, submitted: 9
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:33.561376+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fac6b000/0x0/0x4ffc00000, data 0x1140c5d/0x1220000, compress 0x0/0x0/0x0, omap 0x14e07, meta 0x3d5b1f9), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189317 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:34.561508+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:35.561680+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 704512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:36.561807+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93700096 unmapped: 1720320 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:37.562030+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 1712128 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fac66000/0x0/0x4ffc00000, data 0x1142807/0x1223000, compress 0x0/0x0/0x0, omap 0x15051, meta 0x3d5afaf), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:38.562230+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 1712128 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194437 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:39.562385+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 1703936 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:40.562620+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 1835008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fac68000/0x0/0x4ffc00000, data 0x114287b/0x1224000, compress 0x0/0x0/0x0, omap 0x15051, meta 0x3d5afaf), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:41.562862+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 1802240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:42.563150+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93618176 unmapped: 1802240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.979999542s of 10.044813156s, submitted: 35
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:43.563301+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93675520 unmapped: 1744896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196607 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:44.563443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:45.563584+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:46.563721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:47.563862+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:48.564014+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199381 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:49.564184+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:50.564344+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1687552 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:51.564516+0000)
Dec 04 11:02:09 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14968 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:52.564667+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac62000/0x0/0x4ffc00000, data 0x1145e0c/0x1228000, compress 0x0/0x0/0x0, omap 0x155c2, meta 0x3d5aa3e), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:53.564828+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93749248 unmapped: 1671168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.310994148s of 11.361922264s, submitted: 33
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202155 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:54.565052+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93765632 unmapped: 1654784 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:55.565223+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:56.565373+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:57.565554+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:58.565722+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fac59000/0x0/0x4ffc00000, data 0x114952b/0x122f000, compress 0x0/0x0/0x0, omap 0x15b6a, meta 0x3d5a496), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207085 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:43:59.565815+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:00.565955+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93773824 unmapped: 1646592 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:01.566077+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:02.566308+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fac55000/0x0/0x4ffc00000, data 0x114b258/0x1234000, compress 0x0/0x0/0x0, omap 0x15e44, meta 0x3d5a1bc), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:03.566467+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212061 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:04.566607+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 1638400 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.653561592s of 10.730669022s, submitted: 58
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:05.566986+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fac52000/0x0/0x4ffc00000, data 0x114cc92/0x1236000, compress 0x0/0x0/0x0, omap 0x16112, meta 0x3d59eee), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:06.567154+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:07.567308+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 1581056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fac52000/0x0/0x4ffc00000, data 0x114cbcb/0x1235000, compress 0x0/0x0/0x0, omap 0x16112, meta 0x3d59eee), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:08.567434+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211833 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:09.567581+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:10.567724+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 1630208 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 147 handle_osd_map epochs [148,149], i have 147, src has [1,149]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:11.567871+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 1662976 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:12.568001+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac4f000/0x0/0x4ffc00000, data 0x115021f/0x123a000, compress 0x0/0x0/0x0, omap 0x163dc, meta 0x3d59c24), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:13.568142+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218533 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:14.568291+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:15.568434+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.520431519s of 10.585706711s, submitted: 58
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:16.568713+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:17.568906+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93741056 unmapped: 1679360 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fac51000/0x0/0x4ffc00000, data 0x1150252/0x123a000, compress 0x0/0x0/0x0, omap 0x163dc, meta 0x3d59c24), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:18.569026+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224113 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:19.569163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:20.569299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:21.569421+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac47000/0x0/0x4ffc00000, data 0x11538ff/0x1240000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac47000/0x0/0x4ffc00000, data 0x11538ff/0x1240000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:22.569631+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:23.569780+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222929 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:24.569982+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:25.570160+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1826816 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:26.571211+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.172570229s of 11.231684685s, submitted: 44
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:27.572197+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fac3d000/0x0/0x4ffc00000, data 0x1161566/0x124e000, compress 0x0/0x0/0x0, omap 0x16981, meta 0x3d5967f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:28.573033+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93446144 unmapped: 1974272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:29.573758+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228219 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 1957888 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:30.574314+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabf4000/0x0/0x4ffc00000, data 0x11a64cc/0x1295000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1597440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:31.574936+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 1597440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:32.575555+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 1417216 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:33.576081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabf4000/0x0/0x4ffc00000, data 0x11a64cc/0x1295000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 1417216 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:34.576590+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237541 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:35.576840+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fabc4000/0x0/0x4ffc00000, data 0x11d9022/0x12c7000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:36.577240+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 712704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.599710464s of 10.676040649s, submitted: 52
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:37.577403+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 1654784 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:38.577741+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:39.578089+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235651 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fab80000/0x0/0x4ffc00000, data 0x121d659/0x130c000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:40.578246+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 1335296 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fab80000/0x0/0x4ffc00000, data 0x121d659/0x130c000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:41.578515+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 1949696 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:42.578804+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 1785856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:43.579088+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 1785856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:44.579317+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245715 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96272384 unmapped: 1245184 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faaef000/0x0/0x4ffc00000, data 0x12ab5d1/0x139b000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:45.579468+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96567296 unmapped: 950272 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faac5000/0x0/0x4ffc00000, data 0x12d5e1b/0x13c5000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:46.579651+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 819200 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:47.579898+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.532475471s of 10.651388168s, submitted: 68
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:48.580026+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:49.580260+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248267 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 573440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:50.580417+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97009664 unmapped: 507904 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faa71000/0x0/0x4ffc00000, data 0x132b958/0x141b000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:51.580693+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 1564672 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:52.580879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8145 writes, 31K keys, 8145 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8145 writes, 1973 syncs, 4.13 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2453 writes, 7270 keys, 2453 commit groups, 1.0 writes per commit group, ingest: 9.86 MB, 0.02 MB/s
                                           Interval WAL: 2453 writes, 1058 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 1564672 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4faa31000/0x0/0x4ffc00000, data 0x136b62c/0x145a000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:53.581053+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 1310720 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:54.581222+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253769 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1859584 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:55.581352+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 1646592 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:56.581535+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98230272 unmapped: 1384448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:57.581776+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98230272 unmapped: 1384448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa9d2000/0x0/0x4ffc00000, data 0x13ca761/0x14b9000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:58.581943+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.463150978s of 10.566673279s, submitted: 59
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 1097728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa9d2000/0x0/0x4ffc00000, data 0x13ca761/0x14b9000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:44:59.582080+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259253 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 917504 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa981000/0x0/0x4ffc00000, data 0x141b86f/0x150a000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:00.582210+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 97968128 unmapped: 2695168 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:01.582323+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa96c000/0x0/0x4ffc00000, data 0x14320c0/0x1520000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98181120 unmapped: 2482176 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:02.582438+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa96c000/0x0/0x4ffc00000, data 0x14320c0/0x1520000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:03.582606+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:04.582744+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262117 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:05.582874+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 1114112 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:06.583034+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99549184 unmapped: 1114112 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc ms_handle_reset ms_handle_reset con 0x561165051800
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: get_auth_request con 0x561165735000 auth_method 0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:07.583258+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 1269760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0x14cfe54/0x15be000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:08.583640+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.120785713s of 10.410181046s, submitted: 59
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1933312 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:09.583809+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266233 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1933312 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:10.583970+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa883000/0x0/0x4ffc00000, data 0x15196aa/0x1608000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 1974272 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:11.584184+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99901440 unmapped: 1810432 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:12.584322+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 ms_handle_reset con 0x561165051c00 session 0x561163af4a80
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165780400
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:13.584444+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 1687552 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:14.584588+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269981 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa84c000/0x0/0x4ffc00000, data 0x15508d1/0x163f000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 1687552 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:15.584712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:16.584846+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:17.584999+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:18.585197+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:19.585433+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266237 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.895224571s of 11.019706726s, submitted: 40
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:20.585583+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:21.585720+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x15553d0/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:22.585834+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:23.585985+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:24.586125+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265949 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:25.586265+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:26.586400+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:27.586610+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:28.586752+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x1555496/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:29.586912+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267657 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:30.587053+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.693873405s of 10.723943710s, submitted: 15
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:31.587394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:32.587609+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:33.587768+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561165781c00
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555465/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:34.587915+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274281 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:35.588146+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 14
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:36.588418+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 1662976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:37.588659+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:38.588807+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fb/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:39.588971+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270147 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:40.589133+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.204983711s of 10.259576797s, submitted: 25
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 1646592 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:41.589279+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:42.589435+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:43.589579+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x1555468/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1638400 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:44.589759+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269413 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:45.589884+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:46.590009+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:47.590204+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:48.590360+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x155552f/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:49.590506+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271121 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x155552f/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:50.590661+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.689720154s of 10.207288742s, submitted: 16
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:51.590810+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fb/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:52.590951+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:53.591135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:54.591255+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272669 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:55.591423+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa844000/0x0/0x4ffc00000, data 0x15555f7/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 1630208 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:56.591556+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100089856 unmapped: 1622016 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa844000/0x0/0x4ffc00000, data 0x15555f7/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:57.591715+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:58.591844+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:45:59.592019+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fc/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272063 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:00.592192+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:01.592394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.431275368s of 10.478595734s, submitted: 21
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:02.592619+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:03.592784+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 1777664 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555467/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:04.592943+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270371 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:05.593130+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:06.593254+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:07.593457+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555435/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:08.593605+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:09.593733+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272079 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:10.593865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 1769472 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:11.593981+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:12.594140+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:13.594295+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1555597/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1555597/0x1646000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:14.594429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273611 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.974966049s of 13.371937752s, submitted: 23
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:15.594591+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x15554d0/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:16.594773+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:17.594946+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fe/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:18.595064+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 712704 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:19.595203+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271329 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:20.595332+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:21.595500+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555435/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 704512 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:22.595739+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 679936 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:23.595906+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 638976 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:24.596066+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [1])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272717 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.805570602s of 10.000674248s, submitted: 108
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:25.596185+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 647168 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:26.596340+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555531/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:27.596503+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:28.596672+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa846000/0x0/0x4ffc00000, data 0x15554fc/0x1645000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:29.596817+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273021 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:30.596959+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:31.597147+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0x1555467/0x1644000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:32.597316+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 638976 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x15553d0/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:33.597479+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:34.597690+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271697 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:35.597886+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 630784 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.005593300s of 11.149421692s, submitted: 49
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:36.598050+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 622592 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa848000/0x0/0x4ffc00000, data 0x155539e/0x1643000, compress 0x0/0x0/0x0, omap 0x16d71, meta 0x3d5928f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:37.598348+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 598016 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:38.598472+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 598016 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:39.598631+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 589824 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275191 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x1556fd6/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:40.598754+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 589824 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x1556fd6/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:41.598904+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:42.599066+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa845000/0x0/0x4ffc00000, data 0x1556fa3/0x1646000, compress 0x0/0x0/0x0, omap 0x16fc0, meta 0x3d59040), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:43.599255+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:44.599402+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 581632 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277965 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:45.599560+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:46.599740+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558a54/0x1649000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:47.599917+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 573440 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.530984879s of 11.597694397s, submitted: 42
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:48.600153+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:49.600284+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278937 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:50.600467+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558abd/0x164a000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:51.600745+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 557056 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x1558abd/0x164a000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:52.600873+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:53.601058+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:54.601274+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280501 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:55.601502+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:56.601712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:57.602029+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103260160 unmapped: 548864 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa83e000/0x0/0x4ffc00000, data 0x1558c4e/0x164c000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.142219543s of 10.170134544s, submitted: 14
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 ms_handle_reset con 0x561165781c00 session 0x5611630bce00
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0x1558bb3/0x164b000, compress 0x0/0x0/0x0, omap 0x1727e, meta 0x3d58d82), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:58.602265+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 360448 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:46:59.602465+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 360448 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 15
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281299 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:00.602726+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103473152 unmapped: 335872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:01.602931+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103489536 unmapped: 319488 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:02.603117+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103489536 unmapped: 319488 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0x155a7b8/0x164e000, compress 0x0/0x0/0x0, omap 0x17563, meta 0x3d58a9d), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:03.603334+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:04.603543+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284073 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:05.603790+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:06.604046+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155c237/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:07.604339+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:08.604481+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:09.604661+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287711 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:10.604818+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0x155c208/0x1651000, compress 0x0/0x0/0x0, omap 0x1784b, meta 0x3d587b5), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:11.604984+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.780637741s of 14.128336906s, submitted: 178
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:12.605137+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:13.605303+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:14.605486+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284917 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:15.605617+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 303104 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:16.605797+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:17.605973+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:18.606159+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:19.606335+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288411 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:20.606466+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:21.606615+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:22.606745+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:23.606843+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 1351680 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.096637726s of 12.130258560s, submitted: 22
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:24.607006+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa839000/0x0/0x4ffc00000, data 0x155dc7b/0x1651000, compress 0x0/0x0/0x0, omap 0x17b33, meta 0x3d584cd), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:25.607183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:26.607363+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:27.607546+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:28.607707+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:29.607872+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:30.608000+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:31.608215+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103522304 unmapped: 1335296 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:32.608403+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:33.608529+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:34.608678+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291185 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:35.608873+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 1327104 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:36.609016+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0x155f6fa/0x1654000, compress 0x0/0x0/0x0, omap 0x17e1c, meta 0x3d581e4), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.265516281s of 12.271112442s, submitted: 47
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:37.609163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:38.609335+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:39.609498+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0x15612ff/0x1657000, compress 0x0/0x0/0x0, omap 0x1806d, meta 0x3d57f93), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293959 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:40.610158+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:41.610685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:42.611166+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0x15612ff/0x1657000, compress 0x0/0x0/0x0, omap 0x1806d, meta 0x3d57f93), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:43.611716+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:44.612142+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293959 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:45.612304+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 1310720 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:46.612710+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:47.613148+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:48.613465+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:49.613775+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:50.614028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:51.614303+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:52.614515+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:53.614711+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:54.614939+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:55.615186+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:56.615361+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:57.615635+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1277952 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:58.615833+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:47:59.615977+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:00.616170+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:01.616318+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:02.616443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:03.616668+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:04.616885+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 1261568 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:05.617081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:06.617380+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:07.617591+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:08.617771+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:09.617943+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:10.618080+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:11.618247+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:12.618384+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:13.618526+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:14.618660+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:15.618777+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:16.618924+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:17.619203+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:18.619358+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:19.619499+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296733 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:20.619671+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:21.619857+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:22.620023+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x1562d7e/0x165a000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:23.620178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:24.620293+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.869987488s of 47.977024078s, submitted: 49
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298425 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:25.620429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:26.620553+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:27.620694+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:28.620884+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:29.621046+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:30.621184+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298425 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1562e19/0x165b000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103612416 unmapped: 1245184 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:31.621279+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 1318912 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:32.621443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 1318912 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa80c000/0x0/0x4ffc00000, data 0x158621e/0x1680000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:33.621620+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:34.621731+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 1253376 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.614818573s of 10.713614464s, submitted: 12
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:35.621856+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304117 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa7d5000/0x0/0x4ffc00000, data 0x15bf1d9/0x16b7000, compress 0x0/0x0/0x0, omap 0x183c0, meta 0x3d57c40), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 876544 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:36.622054+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 876544 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:37.622301+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 868352 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:38.622487+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 811008 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0x15c0dde/0x16ba000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:39.622643+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x15dbea1/0x16d5000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 811008 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:40.622786+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311547 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1269760 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:41.622961+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 1269760 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:42.623123+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 983040 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:43.623248+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 983040 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x163c982/0x1735000, compress 0x0/0x0/0x0, omap 0x18612, meta 0x3d579ee), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:44.623401+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 1179648 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.679551125s of 10.199654579s, submitted: 49
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:45.623533+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313163 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa751000/0x0/0x4ffc00000, data 0x163e8af/0x1739000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104128512 unmapped: 1777664 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:46.623799+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104128512 unmapped: 1777664 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:47.624059+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 1810432 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:48.624436+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 1810432 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:49.624739+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 1794048 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:50.624942+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317627 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105136128 unmapped: 1818624 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:51.625222+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 1630208 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:52.625438+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa6f9000/0x0/0x4ffc00000, data 0x1698ed9/0x1793000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 1441792 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:53.625670+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 16
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 1441792 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:54.625911+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 1433600 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.847637177s of 10.000375748s, submitted: 37
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:55.626054+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319963 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 17
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 1425408 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:56.626269+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 1425408 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:57.626561+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x16fb25f/0x17f5000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:58.626758+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x16fb25f/0x17f5000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:48:59.627016+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 1343488 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:00.627203+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:01.627475+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:02.627751+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:03.627911+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:04.628082+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:05.628279+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:06.628467+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:07.628638+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:08.628915+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:09.629134+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:10.629378+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:11.629532+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:12.629748+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:13.629899+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:14.630250+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:15.630475+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:16.630678+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:17.630920+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:18.631057+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:19.631294+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:20.631448+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:21.631602+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:22.631723+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:23.631859+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:24.632006+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:25.632157+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:26.632304+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:27.632554+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:28.632701+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:29.632882+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:30.633039+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:31.633164+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:32.633365+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:33.633573+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:34.633704+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:35.633875+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:36.634048+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 2007040 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:37.634319+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:38.634446+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:39.634563+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:40.634705+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:41.634831+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:42.635017+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:43.635187+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:44.635400+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:45.635612+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:46.635786+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:47.636063+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:48.636302+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:49.636546+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa67b000/0x0/0x4ffc00000, data 0x171723f/0x1811000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 1998848 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:50.637171+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320303 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:51.637310+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:52.637688+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:53.638001+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106012672 unmapped: 1990656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:54.638197+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.528240204s of 60.003269196s, submitted: 7
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:55.638499+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa669000/0x0/0x4ffc00000, data 0x1728b68/0x1823000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:56.638915+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319891 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:57.639241+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106020864 unmapped: 1982464 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:58.639504+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 1810432 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:49:59.639716+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 1843200 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 18
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:00.640224+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 1785856 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:01.640529+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa648000/0x0/0x4ffc00000, data 0x174a3f8/0x1844000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 1728512 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321771 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:02.640827+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:03.641015+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:04.641497+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 2686976 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa624000/0x0/0x4ffc00000, data 0x176dc69/0x1868000, compress 0x0/0x0/0x0, omap 0x18997, meta 0x3d57669), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.810779572s of 10.002739906s, submitted: 154
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:05.641654+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 2531328 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:06.642196+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 2482176 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330825 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:07.642427+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:08.642680+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:09.642881+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 2301952 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:10.643192+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa5bc000/0x0/0x4ffc00000, data 0x17d43e5/0x18d0000, compress 0x0/0x0/0x0, omap 0x18bea, meta 0x3d57416), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:11.643431+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328873 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:12.643546+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 2498560 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:13.643832+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 2473984 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:14.644049+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 2473984 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:15.644229+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.534772873s of 10.144953728s, submitted: 35
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0x17d5e64/0x18d3000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:16.644416+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331375 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:17.644759+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:18.645038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:19.645221+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 1425408 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0x17d5e64/0x18d3000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:20.645400+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 1318912 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:21.645559+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 1318912 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333655 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:22.645708+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 1253376 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:23.645865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 2367488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:24.646029+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 2367488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:25.646215+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x1809e86/0x1907000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:26.646390+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335447 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:27.646582+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 2383872 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.460929871s of 12.507403374s, submitted: 24
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:28.646759+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 3022848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:29.646930+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 3022848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa540000/0x0/0x4ffc00000, data 0x184ef2d/0x194c000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:30.647151+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 2908160 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa527000/0x0/0x4ffc00000, data 0x186811d/0x1965000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:31.647357+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 2859008 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337655 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:32.647528+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:33.647711+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:34.647882+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 2400256 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:35.648051+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 2179072 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:36.648202+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa4d3000/0x0/0x4ffc00000, data 0x18bbec7/0x19b9000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343511 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:37.648380+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:38.648558+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 2195456 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.962003708s of 11.394869804s, submitted: 20
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:39.648684+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1851392 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:40.648861+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1851392 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0x1912ebd/0x1a11000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:41.649028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 1785856 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343393 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:42.649183+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 1785856 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:43.649336+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 2293760 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:44.649482+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106758144 unmapped: 2293760 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:45.649634+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 1998848 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x195fd37/0x1a5d000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:46.649760+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:47.649933+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:48.650075+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1753088 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:49.650219+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:50.650454+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:51.650757+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:52.650911+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:53.651249+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:54.651453+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:55.651676+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:56.651878+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 1712128 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348953 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.184410095s of 17.946382523s, submitted: 20
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:57.652123+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:58.652278+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:50:59.652446+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:00.652588+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:01.652737+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347205 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:02.652892+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:03.653055+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:04.653227+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:05.653383+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:06.653538+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346917 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:07.653928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:08.654193+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:09.654393+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.624203682s of 12.761359215s, submitted: 4
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:10.654585+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:11.654713+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1969a86/0x1a66000, compress 0x0/0x0/0x0, omap 0x18e9e, meta 0x3d57162), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348719 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:12.654853+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:13.655032+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:14.655203+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa421000/0x0/0x4ffc00000, data 0x196b68b/0x1a69000, compress 0x0/0x0/0x0, omap 0x19191, meta 0x3d56e6f), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:15.655428+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _renew_subs
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 2768896 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:16.655633+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:17.655912+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:18.656178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:19.656396+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:20.656535+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:21.656669+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:22.656869+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:23.657084+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:24.657412+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:25.659500+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:26.662074+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:27.663076+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:28.663781+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:29.664930+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:30.665700+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:31.667080+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:32.668044+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:33.668739+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:34.669148+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:35.669429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:36.670312+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:37.670743+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:38.671025+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:39.671492+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:40.671907+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:41.672081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:42.672299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:43.672774+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:44.673175+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:45.673596+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:46.673922+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:47.674343+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:48.674655+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:49.674871+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:50.675196+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:51.675357+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:52.675549+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:53.675710+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:54.675862+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:55.676033+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:56.676185+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:57.676401+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:58.676558+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:51:59.676721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:00.676847+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:01.676980+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:02.677154+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:03.677255+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:04.677420+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:05.677577+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:06.677727+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:07.677932+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:08.678177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:09.678334+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:10.678541+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:11.678723+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2752512 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:12.679008+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:13.679169+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:14.679338+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:15.679480+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:16.679644+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:17.679902+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:18.680073+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:19.680260+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:20.680396+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:21.680513+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:22.680641+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:23.680772+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:24.680956+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:25.681120+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:26.681256+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351493 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:27.681470+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:28.681595+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 2744320 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 19
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:29.681712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.399703979s of 79.442108154s, submitted: 30
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 ms_handle_reset con 0x561165781000 session 0x561163af5a40
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:30.681838+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:31.681964+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 2539520 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Got map version 20
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:32.682128+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 2498560 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:33.682246+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 3645440 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:34.682352+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 3637248 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:35.682478+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 3637248 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:36.682611+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf dump' '{prefix=perf dump}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf schema' '{prefix=perf schema}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 14606336 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:37.682775+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:38.682923+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:39.683144+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:40.683261+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:41.704061+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:42.704232+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:43.704379+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:44.704517+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:45.704634+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:46.704781+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:47.704961+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:48.705078+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:49.705223+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:50.705344+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:51.705466+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:52.705596+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:53.705721+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:54.705852+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:55.705997+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:56.706149+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:57.706305+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:58.706532+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:52:59.706685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:00.706819+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:01.706962+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:02.707092+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:03.707283+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:04.707629+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:05.708685+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:06.709341+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:07.710388+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:08.710664+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:09.710992+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:10.711164+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:11.711477+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:12.711703+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14761984 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:13.711889+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:14.712081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:15.712367+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:16.712810+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:17.713089+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:18.713342+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:19.713629+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:20.713817+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:21.714039+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:22.714273+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:23.714439+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:24.714649+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:25.714822+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:26.715005+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:27.715221+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:28.715424+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:29.715609+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 14753792 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:30.715880+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106397696 unmapped: 14745600 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:31.716039+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106397696 unmapped: 14745600 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:32.716244+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106397696 unmapped: 14745600 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:33.716414+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106397696 unmapped: 14745600 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:34.716550+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:35.716712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:36.716890+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:37.717120+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:38.717263+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:39.717376+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:40.717532+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:41.717729+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:42.717913+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:43.718056+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:44.718235+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:45.718452+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:46.718635+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:47.718826+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:48.718957+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:49.719129+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:50.719263+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:51.719407+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:52.719552+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:53.719686+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:54.719874+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:55.720192+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:56.720318+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:57.720487+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:58.720639+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:53:59.720775+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:00.722236+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106405888 unmapped: 14737408 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:01.722358+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:02.722488+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:03.722640+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:04.722768+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:05.722924+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:06.723330+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:07.723530+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:08.750342+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:09.750913+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:10.751268+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:11.751633+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:12.751949+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:13.752176+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:14.752355+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:15.752511+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:16.752824+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:17.753272+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:18.753454+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:19.753652+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:20.753791+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:21.753918+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:22.754059+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:23.754218+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:24.754392+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:25.754551+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:26.754740+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:27.754941+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:28.755354+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:29.755488+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:30.755657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:31.755903+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:32.756056+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:33.756223+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:34.756429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:35.756627+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:36.756771+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:37.756981+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:38.757135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:39.757286+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:40.757556+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:41.757837+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:42.758147+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:43.758382+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:44.758614+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:45.758850+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:46.759006+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:47.759271+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:48.759448+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:49.759610+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:50.759780+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:51.759937+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14729216 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:52.760166+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2688 syncs, 3.77 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1978 writes, 5639 keys, 1978 commit groups, 1.0 writes per commit group, ingest: 6.38 MB, 0.01 MB/s
                                           Interval WAL: 1978 writes, 715 syncs, 2.77 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:53.760338+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:54.760552+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:55.760703+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:56.760868+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:57.761055+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:58.761204+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:54:59.761354+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:00.761479+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:01.761654+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:02.761804+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:03.761967+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:04.762157+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:05.762321+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:06.762514+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:07.762725+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:08.762863+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:09.763017+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:10.763163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:11.763694+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:12.764240+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:13.764677+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:14.764855+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:15.765363+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:16.765823+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:17.766016+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:18.766529+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:19.766676+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:20.767138+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:21.767523+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:22.767874+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:23.767998+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:24.768299+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:25.768601+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:26.768799+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:27.769038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:28.769187+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:29.769316+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:30.769473+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:31.769684+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:32.769885+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:33.770082+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:34.770309+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:35.770466+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:36.770663+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:37.770843+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:38.770977+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:39.771161+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:40.771321+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:41.771518+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:42.771644+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:43.771794+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:44.771928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:45.772172+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:46.772355+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:47.772534+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:48.772665+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:49.772819+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:50.772947+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:51.773077+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:52.773246+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:53.773371+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:54.773510+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:55.773701+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:56.773851+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:57.774012+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:58.774140+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:55:59.774281+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:00.774385+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:01.774510+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:02.774674+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:03.774816+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:04.774964+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:05.775152+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:06.775287+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:07.775464+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:08.775650+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:09.775821+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:10.775957+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:11.776131+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:12.776269+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:13.776402+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:14.776595+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:15.776801+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:16.776972+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:17.777135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:18.777262+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:19.777496+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:20.777712+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:21.777865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:22.778023+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:23.778160+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 234.597915649s of 234.610473633s, submitted: 144
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:24.778339+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:25.778464+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:26.778616+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106455040 unmapped: 14688256 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:27.778804+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350845 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:28.778950+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 14819328 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:29.779208+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:30.779473+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14721024 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:31.779664+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:32.779833+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:33.780024+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:34.780181+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:35.780413+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:36.780534+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:37.780831+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 14712832 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:38.781021+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:39.781425+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:40.781784+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:41.781968+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:42.782246+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:43.782457+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:44.782652+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:45.782839+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:46.783077+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:47.783406+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 14704640 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:48.783622+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:49.783762+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:50.783907+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:51.784135+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:52.784320+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:53.784504+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:54.784662+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:55.784757+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:56.784898+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:57.785086+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:58.785272+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:56:59.785466+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:00.785636+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:01.785797+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:02.785959+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:03.786047+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:04.786177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:05.786312+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:06.786441+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 14696448 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:07.786587+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:08.786759+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:09.786878+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:10.787017+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:11.787188+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:12.787360+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:13.787518+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:14.787702+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 14680064 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:15.787867+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:16.788067+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:17.788267+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:18.788472+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:19.788728+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:20.789367+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:21.790063+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:22.790339+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:23.790752+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:24.791150+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:25.791548+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 14663680 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:26.791876+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:27.792055+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:28.792338+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:29.792591+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:30.792839+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:31.793071+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:32.793388+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:33.793574+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:34.793756+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:35.793896+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:36.794038+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:37.794178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:38.794455+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:39.794622+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:40.794783+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:41.795079+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:42.795331+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:43.795456+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:44.795572+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:45.795709+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:46.795879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 14647296 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:47.796088+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:48.796310+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:49.796462+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:50.796614+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:51.796779+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:52.796867+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:53.796990+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:54.797138+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:55.797271+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:56.797403+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:57.797542+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:58.797657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:57:59.797754+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:00.797906+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:01.798185+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:02.798339+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:03.798528+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:04.798750+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:05.798922+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:06.799189+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 14630912 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:07.799391+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:08.799608+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:09.799749+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:10.799974+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:11.800139+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:12.800281+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:13.800425+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:14.800671+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:15.800826+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:16.800984+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:17.801200+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:18.801359+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:19.801545+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:20.801667+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:21.801789+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:22.801963+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:23.802148+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:24.802495+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:25.802965+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:26.803290+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 14614528 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:27.803450+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:28.804077+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:29.804403+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:30.804622+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:31.804782+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:32.805230+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:33.805649+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:34.805961+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:35.806385+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:36.806673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:37.806865+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:38.807161+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:39.807435+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:40.807603+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:41.807872+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:42.808133+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:43.808318+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:44.808585+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:45.808886+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:46.809033+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 14598144 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:47.809177+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:48.809488+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:49.809747+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:50.809928+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:51.810094+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:52.810262+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:53.810399+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:54.810536+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:55.810659+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:56.810766+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:57.810959+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:58.811094+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:58:59.811251+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:00.811394+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:01.811524+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:02.811673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:03.811809+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:04.811991+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:05.812161+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:06.812300+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:07.812455+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:08.812606+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:09.812752+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:10.812879+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:11.813031+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:12.813172+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:13.813297+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:14.813452+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:15.813553+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:16.813724+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 14589952 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:17.813968+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:18.814163+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 14573568 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:19.814307+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:20.814477+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:21.814615+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:22.814762+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:23.814914+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:24.815134+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:25.815294+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:26.815502+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:27.815688+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:28.815900+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:29.816592+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:30.817405+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:31.817554+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:32.818011+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:33.818247+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:34.818474+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:35.818707+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:36.819031+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:37.819264+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:38.819392+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:39.819535+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:40.819760+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 14565376 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:41.819947+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:42.820127+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:43.820271+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:44.820421+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:45.820586+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:46.820832+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:47.821069+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:48.821324+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:49.821501+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:50.821746+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:51.821983+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:52.822218+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 14557184 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets getting new tickets!
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:53.822654+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _finish_auth 0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:53.824248+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:54.822787+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:55.822960+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:56.823238+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:57.823507+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:58.823697+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T10:59:59.823841+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:00.823976+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:01.824182+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:02.824351+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:03.824558+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:04.824722+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:05.824868+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:06.825056+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 14540800 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc ms_handle_reset ms_handle_reset con 0x561165735000
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: get_auth_request con 0x561165728000 auth_method 0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: mgrc handle_mgr_configure stats_period=5
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:07.825347+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:08.825507+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:09.825682+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:10.825858+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:11.826065+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:12.826225+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 14376960 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 ms_handle_reset con 0x561165780400 session 0x5611655fda40
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: handle_auth_request added challenge on 0x561162552800
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:13.826346+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:14.826540+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:15.826671+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:16.826883+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:17.827067+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:18.827218+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:19.827362+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:20.827556+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:21.827707+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:22.827892+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:23.828086+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:24.828262+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:25.828443+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:26.828641+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:27.828890+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:28.829053+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:29.829233+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:30.829429+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:31.829605+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:32.829756+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:33.829909+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:34.830039+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:35.830153+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:36.830320+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:37.830521+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:38.830694+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:39.830877+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:40.831028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:41.831402+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:42.831547+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:43.831673+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:44.831824+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:45.832180+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:46.832343+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:47.832512+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:48.832657+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:49.832794+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:50.832938+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:51.833070+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:52.833338+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:53.833551+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:54.833799+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:55.834035+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:56.834158+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:57.834379+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:58.834581+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:00:59.834713+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:00.834893+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:01.835056+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:02.835205+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:03.835329+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:04.835465+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:05.835779+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:06.835898+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 14245888 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:07.836137+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:08.836349+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:09.836508+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:10.836633+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:11.836802+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:12.837048+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:13.837278+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:14.837483+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:15.837670+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:16.837836+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:17.838028+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:18.838303+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:19.838483+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:20.838625+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:21.838763+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:22.838913+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 14237696 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:23.839039+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 297.511810303s of 299.586944580s, submitted: 106
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 14172160 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:24.839178+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 14172160 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:25.839334+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:26.839492+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:27.839645+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:28.839778+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:29.839901+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:30.840016+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:31.840141+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:32.856635+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:33.856831+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 04 11:02:09 compute-0 ceph-osd[86021]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 04 11:02:09 compute-0 ceph-osd[86021]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350773 data_alloc: 218103808 data_used: 5011
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:34.856961+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:35.857081+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 14163968 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:36.857212+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}'
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 14221312 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:37.857344+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 14262272 heap: 121143296 old mem: 2845415832 new mem: 2845415832
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: tick
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_tickets
Dec 04 11:02:09 compute-0 ceph-osd[86021]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-04T11:01:38.857484+0000)
Dec 04 11:02:09 compute-0 ceph-osd[86021]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x196d10a/0x1a6c000, compress 0x0/0x0/0x0, omap 0x19479, meta 0x3d56b87), peers [1,2] op hist [])
Dec 04 11:02:09 compute-0 ceph-osd[86021]: do_command 'log dump' '{prefix=log dump}'
Dec 04 11:02:10 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 04 11:02:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14972 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14970 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:10 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/4188323657' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 04 11:02:10 compute-0 ceph-mon[75358]: from='client.14966 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:10 compute-0 ceph-mon[75358]: from='client.14968 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:10 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:10 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14974 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:10 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 11:02:10 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14978 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: from='client.14972 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: from='client.14970 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: pgmap v1588: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:11 compute-0 ceph-mon[75358]: from='client.14974 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.240680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131240751, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1647, "num_deletes": 251, "total_data_size": 2623826, "memory_usage": 2671856, "flush_reason": "Manual Compaction"}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131260605, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2576534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34126, "largest_seqno": 35772, "table_properties": {"data_size": 2568758, "index_size": 4654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16763, "raw_average_key_size": 20, "raw_value_size": 2552962, "raw_average_value_size": 3117, "num_data_blocks": 207, "num_entries": 819, "num_filter_entries": 819, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845964, "oldest_key_time": 1764845964, "file_creation_time": 1764846131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 19965 microseconds, and 5737 cpu microseconds.
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.260643) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2576534 bytes OK
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.260668) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.263860) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.263880) EVENT_LOG_v1 {"time_micros": 1764846131263873, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.263902) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2616510, prev total WAL file size 2616510, number of live WAL files 2.
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.264746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2516KB)], [74(9223KB)]
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131264799, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12021240, "oldest_snapshot_seqno": -1}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6552 keys, 10252883 bytes, temperature: kUnknown
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131330492, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10252883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10209102, "index_size": 26325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 166097, "raw_average_key_size": 25, "raw_value_size": 10091739, "raw_average_value_size": 1540, "num_data_blocks": 1066, "num_entries": 6552, "num_filter_entries": 6552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764846131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.330753) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10252883 bytes
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.332819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 7066, records dropped: 514 output_compression: NoCompression
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.332835) EVENT_LOG_v1 {"time_micros": 1764846131332827, "job": 42, "event": "compaction_finished", "compaction_time_micros": 65789, "compaction_time_cpu_micros": 21443, "output_level": 6, "num_output_files": 1, "total_output_size": 10252883, "num_input_records": 7066, "num_output_records": 6552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131333323, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764846131335080, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.264660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.335125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.335129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.335131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.335133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-11:02:11.335135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 04 11:02:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613943205' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202805450' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202805450' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:02:11 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14982 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:11 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 04 11:02:11 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2890516360' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 04 11:02:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14990 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:12 compute-0 systemd[1]: Starting Hostname Service...
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.14978 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3613943205' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2202805450' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.10:0/2202805450' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.14982 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:12 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2890516360' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 04 11:02:12 compute-0 systemd[1]: Started Hostname Service.
Dec 04 11:02:12 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14994 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:12 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:12 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 04 11:02:12 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042269593' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357864711' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='client.14990 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='client.14994 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: pgmap v1589: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2042269593' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3357864711' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 04 11:02:13 compute-0 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 04 11:02:13 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 04 11:02:13 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340386145' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 04 11:02:14 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.15008 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:14 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3340386145' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 04 11:02:14 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 04 11:02:14 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492598760' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 04 11:02:14 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:02:14 compute-0 podman[278084]: 2025-12-04 11:02:14.993307255 +0000 UTC m=+0.094869752 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 04 11:02:15 compute-0 podman[278083]: 2025-12-04 11:02:15.005642598 +0000 UTC m=+0.107047581 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 04 11:02:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 04 11:02:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852748258' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 04 11:02:15 compute-0 ceph-mon[75358]: from='client.15008 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:15 compute-0 ceph-mon[75358]: pgmap v1590: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:15 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/492598760' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 04 11:02:15 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3852748258' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 04 11:02:15 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 04 11:02:15 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126005607' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 04 11:02:16 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 04 11:02:16 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3599444414' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 04 11:02:16 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/1126005607' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 04 11:02:16 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3599444414' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 04 11:02:16 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:17 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 04 11:02:17 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633029235' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 04 11:02:17 compute-0 ceph-mon[75358]: pgmap v1591: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:17 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/3633029235' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 04 11:02:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 04 11:02:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141218058' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 04 11:02:18 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.15024 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:18 compute-0 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:18 compute-0 ceph-mon[75358]: from='client.15018 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:18 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/2141218058' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 04 11:02:18 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 04 11:02:18 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198638230' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 04 11:02:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.15028 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:19 compute-0 ceph-mon[75358]: from='client.15024 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:19 compute-0 ceph-mon[75358]: pgmap v1592: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec 04 11:02:19 compute-0 ceph-mon[75358]: from='client.? 192.168.122.100:0/198638230' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 04 11:02:19 compute-0 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 04 11:02:19 compute-0 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.15030 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 04 11:02:20 compute-0 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 04 11:02:20 compute-0 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773403110' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
